Science.gov

Sample records for 3d virtual immersive

  1. Presence Pedagogy: Teaching and Learning in a 3D Virtual Immersive World

    ERIC Educational Resources Information Center

    Bronack, Stephen; Sanders, Robert; Cheney, Amelia; Riedl, Richard; Tashner, John; Matzen, Nita

    2008-01-01

    As the use of 3D immersive virtual worlds in higher education expands, it is important to examine which pedagogical approaches are most likely to bring about success. AET Zone, a 3D immersive virtual world in use for more than seven years, is one embodiment of pedagogical innovation that capitalizes on what virtual worlds have to offer to social…

  2. An Australian and New Zealand Scoping Study on the Use of 3D Immersive Virtual Worlds in Higher Education

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.; Carlson, Lauren; Gregory, Sue; Tynan, Belinda

    2011-01-01

    This article describes the research design of, and reports selected findings from, a scoping study aimed at examining current and planned applications of 3D immersive virtual worlds at higher education institutions across Australia and New Zealand. The scoping study is the first of its kind in the region, intended to parallel and complement a…

  3. Visuomotor learning in immersive 3D virtual reality in Parkinson's disease and in aging.

    PubMed

    Messier, Julie; Adamovich, Sergei; Jack, David; Hening, Wayne; Sage, Jacob; Poizner, Howard

    2007-05-01

    Successful adaptation to novel sensorimotor contexts critically depends on efficient sensory processing and integration mechanisms, particularly those required to combine visual and proprioceptive inputs. If the basal ganglia are a critical part of specialized circuits that adapt motor behavior to new sensorimotor contexts, then patients suffering from basal ganglia dysfunction, as in Parkinson's disease, should show sensorimotor learning impairments. However, this issue has been under-explored. We tested the ability of eight patients with Parkinson's disease (PD), off medication, ten healthy elderly subjects, and ten healthy young adults to reach to a remembered 3D location presented in an immersive virtual environment. A multi-phase learning paradigm was used, having four conditions: baseline, initial learning, reversal learning, and aftereffect. In initial learning, the computer altered the position of a simulated arm endpoint used for movement feedback by shifting its apparent location diagonally, thereby requiring both horizontal and vertical compensations. This visual distortion forced subjects to learn new coordinations between what they saw in the virtual environment and the actual position of their limbs, which they had to derive from proprioceptive information (or efference copy). In reversal learning, the sign of the distortion was reversed. Both elderly subjects and PD patients showed learning-phase-dependent difficulties. First, elderly controls were slower than young subjects when learning both dimensions of the initial biaxial discordance. However, their performance improved during reversal learning, and as a result elderly and young controls showed similar adaptation rates during reversal learning. Second, in striking contrast to healthy elderly subjects, PD patients were more profoundly impaired during the reversal phase of learning. PD patients were able to learn the initial biaxial discordance but were on average slower than age-matched controls

  4. Versatile, Immersive, Creative and Dynamic Virtual 3-D Healthcare Learning Environments: A Review of the Literature

    PubMed Central

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and “serious gaming” that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debated, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers’ Diffusion of Innovations Theory and Siemens’ Connectivism Theory for today’s learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  5. Three‐dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues

    PubMed Central

    Baghabra, Jumana; Boges, Daniya J.; Holst, Glendon R.; Kreshuk, Anna; Hamprecht, Fred A.; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki

    2016-01-01

    Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. J. Comp. Neurol. 524:23–38, 2016. © 2015 Wiley Periodicals, Inc. PMID:26179415

  6. Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues.

    PubMed

    Calì, Corrado; Baghabra, Jumana; Boges, Daniya J; Holst, Glendon R; Kreshuk, Anna; Hamprecht, Fred A; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki; Magistretti, Pierre J

    2016-01-01

    Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. PMID:26179415
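
    The glycogen clustering and proximity analysis mentioned in the two records above can be illustrated with a minimal, hypothetical sketch: given granule centroids and the surface vertices of a neighbouring structure, nearest-neighbour distances are computed with a k-d tree and the mean spacing is compared against the expectation for a spatially random pattern. This is only an assumed stand-in for the authors' Blender/NeuroMorph tooling; the arrays, units, and the Clark-Evans-style comparison are illustrative, not taken from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical inputs: glycogen granule centroids and surface vertices of a
        # nearby structure (e.g. an astrocytic process), as N x 3 arrays in nanometres.
        rng = np.random.default_rng(0)
        granules = rng.random((500, 3)) * 1000.0
        surface_vertices = rng.random((2000, 3)) * 1000.0

        # Proximity: distance from each granule to the closest surface vertex.
        surface_tree = cKDTree(surface_vertices)
        dist_to_surface, _ = surface_tree.query(granules)

        # Clustering tendency: mean nearest-neighbour distance among granules,
        # compared with the approximate expectation for a random 3D point pattern.
        granule_tree = cKDTree(granules)
        nn_dist, _ = granule_tree.query(granules, k=2)      # k=1 is the point itself
        mean_nn = nn_dist[:, 1].mean()
        density = len(granules) / 1000.0 ** 3               # granules per cubic nanometre
        expected_random = 0.554 / np.cbrt(density)          # ~E[NN distance] under randomness
        print(f"mean surface distance: {dist_to_surface.mean():.1f} nm")
        print(f"mean NN distance: {mean_nn:.1f} nm (random: {expected_random:.1f} nm)")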

  7. L2 Immersion in 3D Virtual Worlds: The Next Thing to Being There?

    ERIC Educational Resources Information Center

    Paillat, Edith

    2014-01-01

    Second Life is one of the many three-dimensional virtual environments accessible through a computer and a fast broadband connection. Thousands of participants connect to this platform to interact virtually with the world, join international communities of practice and, for some, role play groups. Unlike online role play games however, Second Life…

  8. Enhancing Time-Connectives with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David; Eden, Sigal

    2010-01-01

    This study sought to test the most efficient representation mode with which children with hearing impairment could express a story while producing connectives indicating relations of time and of cause and effect. Using Bruner's (1973, 1986, 1990) representation stages, we tested the comparative effectiveness of Virtual Reality (VR) as a mode of…

  9. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, analyse it with respect to specific questions, synthesise results, and present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s, and so have the methods of using geovisualisation in teaching. Whereas some might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance like the CAVE makes 3D visualisation, including 3D geovisualisation, an important tool not only for businesses (Abulrub et al. 2012) and for the public (Wissen et al. 2008), but also for educational purposes, for which it had hardly been used yet. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesised three fundamental technology-based visual aspects: First, the conception and comprehension of space and location does not need to be generated, but is instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  10. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or virtual reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.), and the investment in infrastructure and its cost was reserved for large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value compared to conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. In astronomy, the use for education is clear: it is easy to imagine mobile and light planetariums, or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess their contribution compared with other display modes.

  11. Quality of Grasping and the Role of Haptics in a 3-D Immersive Virtual Reality Environment in Individuals With Stroke.

    PubMed

    Levin, Mindy F; Magdalon, Eliane C; Michaelsen, Stella M; Quevedo, Antonio A F

    2015-11-01

    Reaching and grasping parameters with and without haptic feedback were characterized in people with chronic stroke. Twelve individuals (67 ± 10 years) with chronic stroke and arm/hand paresis (Fugl-Meyer Assessment-Arm ≥ 46/66 pts) participated. Three-dimensional (3-D) temporal and spatial kinematics of reaching and grasping movements to three objects (can: cylindrical grasp; screwdriver: power grasp; pen: precision grasp) were recorded in a physical environment (PE) with and without additional haptic feedback and in a 3-D virtual environment (VE) with haptic feedback. Participants reached, grasped, and transported physical and virtual objects using similar movement strategies in all conditions. Reaches made in the VE were less smooth and slower compared to the PE. Arm and trunk kinematics were similar in both environments and glove conditions. For grasping, stroke subjects preserved aperture scaling to object size but used wider hand apertures, with longer delays between the times of maximal reaching velocity and maximal grasping aperture. Wearing the glove decreased reaching velocity. Our results in a small group of subjects suggest that providing haptic information in the VE did not affect the validity of reaching and grasping movements. Small disparities in movement parameters between environments may be due to differences in perception of object distance in the VE. Reach-to-grasp kinematics to smaller objects may be improved by better 3-D rendering. The comparable kinematics between environments and conditions is encouraging for the incorporation of high-quality VEs in rehabilitation programs aimed at improving upper limb recovery.

  12. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  13. Use of Three-Dimensional (3-D) Immersive Virtual Worlds in K-12 And Higher Education Settings: A Review of the Research

    ERIC Educational Resources Information Center

    Hew, Khe Foon; Cheung, Wing Sum

    2010-01-01

    In this paper, we review past empirical research studies on the use of three-dimensional immersive virtual worlds in education settings such as K-12 and higher education. Three questions guided our review: (1) How are virtual worlds (e.g., "Active Worlds", "Second Life") used by students and teachers? (2) What types of research methods have been…

  14. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    PubMed Central

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI – anaglyphic glasses), and 3DI (head-mounted display with position and head orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two other non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003

  15. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) has inherent potential for constructing effective learning environments thanks to its "3I" characteristics: Interaction, Immersion, and Imagination. Along with the development of VRT, it is now being applied to education in more profound ways. The virtual museum is one such application. The virtual museum is based on Web3D technology, and extensibility is the most important design factor. Considering the advantages and disadvantages of each Web3D technology, VRML, Cult3D, and Viewpoint were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the virtual museum an interactive learning environment.

  16. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new ones being discovered all the time.

  17. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new ones being discovered all the time.

  18. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally, the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still being viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  19. "Immersed in Learning": Supporting Creative Practice in Virtual Worlds

    ERIC Educational Resources Information Center

    Doyle, Denise

    2010-01-01

    The "Immersed in Learning" project began in 2007 to evaluate the use of 3D virtual worlds as a teaching and learning tool in undergraduate programmes in digital media at the University of Wolverhampton, UK. A question that the research set out to explore was what were the benefits of integrating 3D immersive learning with face-to-face learning for…

  20. Computer-assisted three-dimensional surgical planning and simulation: 3D virtual osteotomy.

    PubMed

    Xia, J; Ip, H H; Samman, N; Wang, D; Kot, C S; Yeung, R W; Tideman, H

    2000-02-01

    A computer-assisted three-dimensional virtual osteotomy system for orthognathic surgery (CAVOS) is presented. The virtual reality workbench is used for surgical planning. The surgeon is immersed in a virtual reality environment with stereo eyewear, holds a virtual "scalpel" (3D mouse), and operates on a "real" patient (3D visualization) to obtain a pre-surgical prediction (3D bony segment movements). Virtual surgery on a computer-generated 3D head model is simulated and can be visualized from any arbitrary viewing point on a personal computer system.

  1. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  2. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions according to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload, and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  3. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  4. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…

  5. 3DIVS: 3-Dimensional Immersive Virtual Sculpting

    SciTech Connect

    Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E

    2001-10-03

    Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models such as clay models or mockups in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.

  6. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases, which has proved efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with 3D texture-mapping technology based on the graphics processing unit (GPU) and can be displayed dynamically in real time. During real-time display, we can not only observe the inside of the heart chambers but also examine the 3D data from new viewing angles after it has been clipped according to the doctor's needs. For observation, we provide both an interactive mode and an automatic mode. In the automatic mode, we use Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, to find the view path quickly, and then use the view path to calculate the four-chamber plane. PMID:23198444
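
    The automatic mode described above, in which Dijkstra's algorithm with a 3D Euclidean distance weight is used to find a fly-through view path, can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation; the voxel graph, the neighbours function, and the start/goal points are assumed for the example.

        import heapq
        import math

        def euclidean(a, b):
            # 3D Euclidean distance between voxel centres, used as the edge weight
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def view_path(start, goal, neighbours):
            """Dijkstra shortest path over a voxel graph of the heart lumen.

            `neighbours(v)` is assumed to yield voxels adjacent to v that lie
            inside the segmented cavity; `start` and `goal` are (x, y, z) tuples.
            """
            dist = {start: 0.0}
            prev = {}
            heap = [(0.0, start)]
            done = set()
            while heap:
                d, v = heapq.heappop(heap)
                if v in done:
                    continue
                done.add(v)
                if v == goal:
                    break
                for n in neighbours(v):
                    nd = d + euclidean(v, n)
                    if nd < dist.get(n, float("inf")):
                        dist[n] = nd
                        prev[n] = v
                        heapq.heappush(heap, (nd, n))
            # Walk back from the goal to recover the camera fly-through path
            path, v = [goal], goal
            while v != start:
                v = prev[v]
                path.append(v)
            return path[::-1]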

  7. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and use of manipulatives offered by a learning tool on students’ motivation and learning outcomes. Use of this VLE is also a valuable opportunity to explore how learners’ spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  8. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like these to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building not only to meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained for future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used, such as XNA Game Studio, the .NET framework, and Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the results of our evaluation and the lessons learned from our effort.

  9. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  10. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  11. Immersive virtual reality simulations in nursing education.

    PubMed

    Kilmon, Carol A; Brown, Leonard; Ghosh, Sumit; Mikitiuk, Artur

    2010-01-01

    This article explores immersive virtual reality as a potential educational strategy for nursing education and describes an immersive learning experience now being developed for nurses. This pioneering project is a virtual reality application targeting speed and accuracy of nurse response in emergency situations requiring cardiopulmonary resuscitation. Other potential uses and implications for the development of virtual reality learning programs are discussed. PMID:21086871

  12. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  13. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  14. Social Interaction Development through Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2014-01-01

    The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…

  15. Immersive video for virtual tourism

    NASA Astrophysics Data System (ADS)

    Hernandez, Luis A.; Taibo, Javier; Seoane, Antonio J.

    2001-11-01

    This paper describes a new panoramic, 360-degree video system and its use in a real application for virtual tourism. The development of this system has required the design of new hardware for multi-camera recording, and software for video processing in order to elaborate the panorama frames and to play back the resulting high-resolution video footage on a regular PC. The system makes use of new VR display hardware, such as the WindowVR, in order to make the view dependent on the viewer's spatial orientation and so enhance immersiveness. There are very few examples of similar technologies, and the existing ones are extremely expensive and/or impossible to implement on personal computers with acceptable quality. The idea of the system starts from the concept of the panorama picture, developed in technologies such as QuickTimeVR. This idea is extended to the concept of the panorama frame, which leads to panorama video. However, many problems must be solved to implement this simple scheme. Data acquisition involves simultaneous footage recording in every direction, and later processing to convert every set of frames into a single high-resolution panorama frame. Since there is no common hardware capable of 4096x512 video playback at a 25 fps rate, the video must be split into smaller pieces, which the system must manage in order to fetch the right frames of the right parts as the user's movement demands. As the system must be immersive, the physical interface for watching the 360-degree video is a WindowVR, that is, a flat screen with an orientation tracker that the user holds in his hands, moving it as if it were a virtual window through which the city and its activity are shown.

  16. Faculty Perceptions of Instruction in Collaborative Virtual Immersive Learning Environments in Higher Education

    ERIC Educational Resources Information Center

    Janson, Barbara

    2013-01-01

    The use of 3D (three-dimensional) avatars in synchronous virtual worlds for educational purposes has been adopted for only about a decade. Universities are offering synchronous, avatar-based virtual courses for credit within 3D worlds (Luo & Kemp, 2008). Faculty and students immerse themselves, via avatars, in virtual worlds and communicate…

  17. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material in geoscience courses in order to support and improve the teaching-learning process, especially in topics that are well known to be difficult for students. As part of the project, professors and students are trained in visualization techniques; then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all attendants can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas for Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions through videoconferences with other universities and researchers.

  18. Learning in 3-D Virtual Worlds: Rethinking Media Literacy

    ERIC Educational Resources Information Center

    Qian, Yufeng

    2008-01-01

    3-D virtual worlds, as a new form of learning environments in the 21st century, hold great potential in education. Learning in such environments, however, demands a broader spectrum of literacy skills. This article identifies a new set of media literacy skills required in 3-D virtual learning environments by reviewing exemplary 3-D virtual…

  19. 3D Immersive Patient Simulators and Their Impact on Learning Success: A Thematic Review

    PubMed Central

    Wahba, Roger; Chang, De-Hua; Plum, Patrick; Hölscher, Arnulf H; Stippel, Dirk L

    2015-01-01

    Background: Immersive patient simulators (IPSs) combine the simulation of virtual patients with a three-dimensional (3D) environment and, thus, allow an illusionary immersion into a synthetic world, similar to computer games. Playful learning in a 3D environment is motivating and allows repetitive training and internalization of medical workflows (ie, procedural knowledge) without compromising real patients. The impact of this innovative educational concept on learning success requires review of feasibility and validity. Objective: It was the aim of this paper to conduct a survey of all immersive patient simulators currently available. In addition, we address the question of whether the use of these simulators has an impact on knowledge gain by summarizing the existing validation studies. Methods: A systematic literature search via PubMed was performed using predefined inclusion criteria (ie, virtual worlds, focus on education of medical students, validation testing) to identify all available simulators. Validation testing was defined as the primary end point. Results: There are currently 13 immersive patient simulators available. Of these, 9 are Web-based simulators and represent feasibility studies. None of these simulators are used routinely for student education. The workstation-based simulators are commercially driven and show a higher quality in terms of graphical quality and/or data content. Out of the studies, 1 showed a positive correlation between simulated content and real content (ie, content validity). There was a positive correlation between the outcome of simulator training and alternative training methods (ie, concordance validity), and a positive coherence between measured outcome and future professional attitude and performance (ie, predictive validity). Conclusions: IPSs can promote learning and consolidation of procedural knowledge. The use of immersive patient simulators is still marginal, and technical and educational approaches are heterogeneous

  20. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and of course XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this
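
    The visual encoding described above (three data dimensions mapped to XYZ position, further dimensions to shape, colour, and size, with a link attached for click-through) can be sketched independently of the Unity implementation. The following is a minimal, assumed illustration; the column names and catalog rows are hypothetical, and the output is a plain list of glyph descriptions rather than rendered objects.

        import numpy as np

        def encode_points(table, xyz_cols, color_col, size_col, shape_col,
                          shapes=("cube", "sphere", "cone")):
            """Map catalog rows to renderable 3D glyph descriptions.

            Three columns become spatial position; further columns are encoded as a
            normalised colour value, a size, and a categorical shape.
            """
            color_vals = np.array([row[color_col] for row in table], dtype=float)
            cmin, cmax = color_vals.min(), color_vals.max()
            glyphs = []
            for row, c in zip(table, color_vals):
                glyphs.append({
                    "position": tuple(float(row[k]) for k in xyz_cols),
                    "color": (c - cmin) / (cmax - cmin + 1e-12),  # index into a colour map
                    "size": float(row[size_col]),
                    "shape": shapes[int(row[shape_col]) % len(shapes)],
                    "link": row.get("url"),  # shown when the user clicks the glyph
                })
            return glyphs

        # Hypothetical usage with two catalog entries
        catalog = [
            {"ra": 10.1, "dec": -5.2, "z": 0.3, "mag": 21.5, "radius": 1.2, "cls": 0,
             "url": "http://example.org/obj1"},
            {"ra": 10.4, "dec": -5.0, "z": 0.7, "mag": 22.1, "radius": 0.8, "cls": 1,
             "url": "http://example.org/obj2"},
        ]
        for glyph in encode_points(catalog, ("ra", "dec", "z"), "mag", "radius", "cls"):
            print(glyph)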

  1. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  2. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused either on polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or on miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micro-mirrors delivering 720p-resolution displays in a small form factor with a high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  3. A geoscience perspective on immersive 3D gridded data visualization

    NASA Astrophysics Data System (ADS)

    Billen, Magali I.; Kreylos, Oliver; Hamann, Bernd; Jadamec, Margarete A.; Kellogg, Louise H.; Staadt, Oliver; Sumner, Dawn Y.

    2008-09-01

    We describe visualization software, Visualizer, that was developed specifically for interactive, visual exploration in immersive virtual reality (VR) environments. Visualizer uses carefully optimized algorithms and data structures to support the high frame rates required for immersion and the real-time feedback required for interactivity. As an application developed for VR from the ground up, Visualizer realizes benefits that usually cannot be achieved by software initially developed for the desktop and later ported to VR. However, Visualizer can also be used on desktop systems (unix/linux-based operating systems including Mac OS X) with a similar level of real-time interactivity, bridging the "software gap" between desktop and VR that has been an obstacle for the adoption of VR methods in the Geosciences. While many of the capabilities of Visualizer are already available in other software packages used in a desktop environment, the features that distinguish Visualizer are: (1) Visualizer can be used in any VR environment including the desktop, GeoWall, or CAVE, (2) in non-desktop environments the user interacts with the data set directly using a wand or other input devices instead of working indirectly via dialog boxes or text input, (3) on the desktop, Visualizer provides real-time interaction with very large data sets that cannot easily be viewed or manipulated in other software packages. Three case studies are presented that illustrate the direct scientific benefits realized by analyzing data or simulation results with Visualizer in a VR environment. We also address some of the main obstacles to widespread use of VR environments in scientific research with a user study that shows Visualizer is easy to learn and to use in a VR environment and can be as effective on desktop systems as native desktop applications.

  4. A 3D Immersive Fault Visualizer and Editor

    NASA Astrophysics Data System (ADS)

    Yikilmaz, M. B.; van Aalsburg, J.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2007-12-01

    Digital fault models are an important resource for the study of earthquake dynamics, fault-earthquake interactions and seismicity. Once digitized, these fault models can be used in Finite Element Model (FEM) programs or earthquake simulations such as Virtual California (VC). However, these models are often difficult to create, requiring a substantial amount of time to generate the fault topology and compute the properties of the individual segments. To aid in the construction of such models we have developed an immersive virtual reality (VR) application to visualize and edit fault models. Our program is designed to run in a CAVE (walk-in VR environment), but also works in a wide range of other environments, including desktop systems and GeoWalls. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). Immersive VR environments are ideal for visualizing and manipulating three-dimensional data sets. Our program allows users to create new models or modify existing ones; for example, by repositioning individual fault segments, by changing the dip angle, or by modifying (or assigning) the value of a property associated with a particular fault segment (i.e. slip rate). With the addition of high resolution Digital Elevation Models (DEM) the user can accurately add new segments to an existing model or create a fault model entirely from scratch. Interactively created or modified models can be written to XML files at any time; from there the data may easily be converted into various formats required by the analysis software or simulation. We believe that the ease of interaction provided by VR technology is ideally suited to the problem of creating and editing digital fault models. Our software provides the user with an intuitive environment for visualizing and editing fault model data. This translates not only into less time spent creating fault models, but also enables the researcher to

  5. Interpersonal distance in immersive virtual environments.

    PubMed

    Bailenson, Jeremy N; Blascovich, Jim; Beall, Andrew C; Loomis, Jack M

    2003-07-01

    Digital immersive virtual environment technology (IVET) enables behavioral scientists to conduct ecologically realistic experiments with near-perfect experimental control. The authors employed IVET to study the interpersonal distance maintained between participants and virtual humans. In Study 1, participants traversed a three-dimensional virtual room in which a virtual human stood. In Study 2, a virtual human approached participants. In both studies, participant gender, virtual human gender, virtual human gaze behavior, and whether virtual humans were allegedly controlled by humans (i.e., avatars) or computers (i.e., agents) were varied. Results indicated that participants maintained greater distance from virtual humans when approaching their fronts compared to their backs. In addition, participants gave more personal space to virtual agents who engaged them in mutual gaze. Moreover, when virtual humans invaded their personal space, participants moved farthest from virtual human agents. The advantages and disadvantages of IVET for the study of human behavior are discussed.

  6. Design and Implementation of a 3D Multi-User Virtual World for Language Learning

    ERIC Educational Resources Information Center

    Ibanez, Maria Blanca; Garcia, Jose Jesus; Galan, Sergio; Maroto, David; Morillo, Diego; Kloos, Carlos Delgado

    2011-01-01

    The best way to learn is by having a good teacher and the best language learning takes place when the learner is immersed in an environment where the language is natively spoken. 3D multi-user virtual worlds have been claimed to be useful for learning, and the field of exploiting them for education is becoming more and more active thanks to the…

  7. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  8. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  9. Modulation of cortical activity in 2D versus 3D virtual reality environments: an EEG study.

    PubMed

    Slobounov, Semyon M; Ray, William; Johnson, Brian; Slobounov, Elena; Newell, Karl M

    2015-03-01

    There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1 we examined behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, the power of frontal midline EEG theta (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive 3D enriched environment requires allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings. PMID:25448267

  10. Contextual EFL Learning in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Lan, Yu-Ju

    2015-01-01

    The purposes of the current study are to develop virtually immersive EFL learning contexts for EFL learners in Taiwan to pre- and review English materials beyond the regular English class schedule. A 2-iteration action research lasting for one semester was conducted to evaluate the effects of virtual contexts on learners' EFL learning. 132…

  11. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotron™ processors by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to move freely about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electromagnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  12. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation provides a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  13. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  14. ESL Teacher Training in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Kozlova, Iryna; Priven, Dmitri

    2015-01-01

    Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…

  15. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  16. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used to generate virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images, applying close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, the paper gives conclusions, a short justification and analysis, and present trends in 3D city modeling. The paper thus provides an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  17. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  18. Using Immersive Virtual Environments for Certification

    NASA Technical Reports Server (NTRS)

    Lutz, R.; Cruz-Neira, C.

    1998-01-01

    Immersive virtual environments (VEs) technology has matured to the point where it can be utilized as a scientific and engineering problem solving tool. In particular, VEs are starting to be used to design and evaluate safety-critical systems that involve human operators, such as flight and driving simulators, complex machinery training, and emergency rescue strategies.

  19. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  20. Foreign language learning in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Sheldon, Lee; Si, Mei; Hand, Anton

    2012-03-01

    Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages. Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point. We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to truly effective language learning in virtual environments. In this paper, we describe the development of a novel application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon becomes complicated with intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic. Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language, cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with characters in the game.

  1. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software package, developed based on object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software tool for 3D phantom configuration and has passed real-scene application tests. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms. PMID:24804488
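
    A minimal sketch of the voxel-phantom idea described above, assuming a simple NumPy volume of Hounsfield-like values (a water-equivalent cube with a bone-like spherical insert). This is not MPhantom's actual builder API, and the DICOM export step is omitted.

    # Illustrative sketch (assumed volume layout, not MPhantom's API).
    import numpy as np

    def build_phantom(n=128, insert_radius=20, body_hu=0, insert_hu=700, air_hu=-1000):
        """Return an (n, n, n) int16 volume: air background, water body, bone sphere."""
        vol = np.full((n, n, n), air_hu, dtype=np.int16)
        body = slice(n // 8, n - n // 8)
        vol[body, body, body] = body_hu                      # water-equivalent body
        z, y, x = np.ogrid[:n, :n, :n]
        c = n // 2
        sphere = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= insert_radius ** 2
        vol[sphere] = insert_hu                              # bone-like insert
        return vol

    phantom = build_phantom()
    print(phantom.shape, phantom.min(), phantom.max())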

  2. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  3. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  4. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE™ (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact with and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  5. Creating an Immersive Mars Experience Using Unity3D

    NASA Technical Reports Server (NTRS)

    Miles, Sarah

    2011-01-01

    Between the two Mars Exploration Rovers, Spirit and Opportunity, NASA has collected over 280,000 images while studying the Martian surface. This number will continue to grow, with Opportunity continuing to send images and with another rover, Curiosity, launching soon. Using data collected by and for these Mars rovers, I am contributing to the creation of virtual experiences that will expose the general public to Mars. These experiences not only work to increase public knowledge, but they attempt to do so in an engaging manner more conducive to knowledge retention by letting others view Mars through the rovers' eyes. My contributions include supporting image viewing (for example, allowing users to click on panoramic images of the Martian surface to access closer range photos) as well as enabling tagging of points of interest. By creating a more interactive way of viewing the information we have about Mars, we are not just educating the public about a neighboring planet. We are showing the importance of doing such research.

  6. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…

  7. 3D Virtual Worlds as Environments for Literacy Learning

    ERIC Educational Resources Information Center

    Merchant, Guy

    2010-01-01

    Background: Although much has been written about the ways in which new technology might transform educational practice, particularly in the area of literacy learning, there is relatively little empirical work that explores the possibilities and problems--or even what such a transformation might look like in the classroom. 3D virtual worlds offer a…

  8. Learning Relative Motion Concepts in Immersive and Non-Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-01-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop…

  9. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping. At the end…

  10. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  11. Dental impressions using 3D digital scanners: virtual becomes reality.

    PubMed

    Birnbaum, Nathan S; Aaronson, Heidi B

    2008-10-01

    The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.

  12. Virtual environment interaction through 3D audio by blind children.

    PubMed

    Sánchez, J; Lumbreras, M

    1999-01-01

    Interactive software is actively used for learning, cognition, and entertainment purposes. Educational entertainment software is not very popular among blind children because most computer games and electronic toys have interfaces that are only accessible through visual cues. This work applies the concept of interactive hyperstories to blind children. Hyperstories are implemented in a 3D acoustic virtual world. In past studies we have conceptualized a model to design hyperstories. This study illustrates the feasibility of the model. It also provides an introduction for researchers to the field of entertainment software for blind children. As a result, we have designed and field tested AudioDoom, a virtual environment interacted with through 3D audio by blind children. AudioDoom is also software that enables testing nontrivial interfaces and cognitive tasks with blind children. We explored the construction of cognitive spatial structures in the minds of blind children through audio-based entertainment and spatial sound navigable experiences. Children playing AudioDoom were exposed to first-person experiences by exploring highly interactive virtual worlds through the use of 3D aural representations of the space. This experience was structured in several cognitive tasks where they had to build concrete models of their spatial representations, constructed through the interaction with AudioDoom, by using Lego™ blocks. We analyze our preliminary results after testing AudioDoom with Chilean children from a school for blind children. We discuss issues such as interactivity in software without visual cues, the representation of spatial sound navigable experiences, and entertainment software such as computer games for blind children. We also evaluate the feasibility of constructing virtual environments through the design of dynamic learning materials with audio cues.

  13. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. We had subjects move passively, in one of two different exploration modes, through a visual virtual environment that consisted of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the 3 canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded or not to the tunnel through which they had just passed. Results showed that performance was better on this task for the upright than for the unconstrained displacement mode; i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effects of gravity that lead to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved

  14. Heard on The Street: GIS-Guided Immersive 3D Models as an Augmented Reality for Team Collaboration

    NASA Astrophysics Data System (ADS)

    Quinn, B. B.

    2007-12-01

    Grid computing can be configured to run physics simulations for spatially contiguous virtual 3D model spaces. Each cell is run by a single processor core simulating 1/16 square kilometer of surface and can contain up to 15,000 objects. In this work, a model of one urban block was constructed in the commercial 3D online digital world Second Life http://secondlife.com to prove the concept that GIS data can guide the building of an accurate in-world model. Second Life simulators support terrain modeling at two-meter grid intervals. Access to the Second Life grid is worldwide if connections to the US-based servers are possible. This immersive 3D model allows visitors to explore the space at will, with physics simulated for object collisions, gravity, and wind forces about 40 times per second. Visitors view this world as renderings by their 3-D display card of graphic objects and raster textures that are streamed from the simulator grid to the Second Life client, based on that client's instantaneous field of view. Visitors to immersive 3D models experience a virtual world that engages their innate abilities to relate to the real immersive 3D world in which humans have evolved. These abilities enable far more complex and dynamic 3D environments to be quickly and accurately comprehended by more visitors than most non-immersive 3D environments. Objects of interest at ground surface and below can be walked around, possibly entered, viewed at arm's length or flown over at 500 meters above. Videos of renderings have been recorded (as machinima) to share a visit as part of public presentations. Key to this experience is that dozens of simultaneous visitors can experience the model at the same time, each exploring it at will and seeing (if not colliding with) one another, like twenty geology students on a virtual outcrop, where each student might fly if they chose to. This work modeled the downtown Berkeley, CA, transit station in the Second Life region "Gualala" near [170, 35, 35

  15. Immersive Training Systems: Virtual Reality and Education and Training.

    ERIC Educational Resources Information Center

    Psotka, Joseph

    1995-01-01

    Describes virtual reality (VR) technology and VR research on education and training. Focuses on immersion as the key added value of VR, analyzes cognitive variables connected to immersion, how it is generated in synthetic environments and its benefits. Discusses value of tracked, immersive visual displays over nonimmersive simulations. Contains 78…

  16. Enhanced LOD Concepts for Virtual 3d City Models

    NASA Astrophysics Data System (ADS)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three dimensional representations of city objects like buildings, streets or technical infrastructure. Because size and complexity of these models continuously grow, a Level of Detail (LoD) concept effectively supporting the partitioning of a complete model into alternative models of different complexity and providing metadata, addressing informational content, complexity and quality of each alternative model is indispensable. After a short overview on various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates between first, a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second between the interior building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of an UML model.

  17. Declarative Knowledge Acquisition in Immersive Virtual Learning Environments

    ERIC Educational Resources Information Center

    Webster, Rustin

    2016-01-01

    The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…

  18. Machinima Interventions: Innovative Approaches to Immersive Virtual World Curriculum Integration

    ERIC Educational Resources Information Center

    Middleton, Andrew John; Mather, Richard

    2008-01-01

    The educational value of Immersive Virtual Worlds (IVWs) seems to be in their social immersive qualities and as an accessible simulation technology. In contrast to these synchronous applications this paper discusses the use of educational machinima developed in IVW virtual film sets. It also introduces the concept of media intervention, proposing…

  19. From Multi-User Virtual Environment to 3D Virtual Learning Environment

    ERIC Educational Resources Information Center

    Livingstone, Daniel; Kemp, Jeremy; Edgar, Edmund

    2008-01-01

    While digital virtual worlds have been used in education for a number of years, advances in the capabilities and spread of technology have fed a recent boom in interest in massively multi-user 3D virtual worlds for entertainment, and this in turn has led to a surge of interest in their educational applications. In this paper we briefly review the…

  20. A New Navigation Method for 3D Virtual Environment Exploration

    NASA Astrophysics Data System (ADS)

    Haydar, Mahmoud; Maidi, Madjid; Roussel, David; Mallem, Malik

    2009-03-01

    Navigation in virtual environments is a complex task which imposes a high cognitive load on the user. It consists of maintaining knowledge of the user's current position and orientation while he moves through the space. In this paper, we present a novel approach for navigation in 3D virtual environments. The method is based on the principle of skiing, and the idea is to give the user full control of his navigation speed and rotation using his two hands. This technique enables user-steered exploration by determining the direction and the speed of motion from the positions of the user's hands. A speed-control module is included in the technique so that the speed can easily be controlled using the angle between the hands. The direction of motion is given by the axis orthogonal to the segment joining the two hands. A user study shows the efficiency of the method in performing exploration tasks in complex, large-scale 3D environments. Furthermore, we propose an experimental protocol to show that this technique presents a high level of navigation guidance and control, achieving significantly better performance in comparison to simple navigation techniques.
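
    A hedged sketch of the skiing metaphor described above: the travel direction is taken orthogonal to the segment joining the two tracked hands, and the speed is scaled by the angle between the hand orientation vectors. The exact mapping, axis conventions, and gain constant are assumptions, not the authors' implementation.

    # Illustrative sketch of a skiing-style, two-handed navigation mapping.
    import numpy as np

    def ski_navigation(left_pos, right_pos, left_dir, right_dir, gain=2.0):
        """Return (velocity_vector, speed) from two tracked hand positions/orientations."""
        left_pos, right_pos = np.asarray(left_pos, float), np.asarray(right_pos, float)
        seg = right_pos - left_pos
        # Direction: perpendicular to the hand-to-hand segment, kept in the horizontal plane.
        direction = np.array([-seg[1], seg[0], 0.0])
        norm = np.linalg.norm(direction)
        if norm == 0:
            return np.zeros(3), 0.0
        direction /= norm
        # Speed: grows with the angle between the two hand orientation vectors.
        cos_a = np.clip(np.dot(left_dir, right_dir)
                        / (np.linalg.norm(left_dir) * np.linalg.norm(right_dir)), -1.0, 1.0)
        speed = gain * np.arccos(cos_a)
        return direction * speed, speed

    vel, speed = ski_navigation([-0.3, 0.0, 1.2], [0.3, 0.0, 1.2],
                                [0.0, 1.0, 0.0], [0.2, 1.0, 0.0])
    print(vel, speed)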

  1. Virtually Ostracized: Studying Ostracism in Immersive Virtual Environments

    PubMed Central

    Wesselmann, Eric D.; Law, Alvin Ty; Williams, Kipling D.

    2012-01-01

    Electronic-based communication (such as Immersive Virtual Environments; IVEs) may offer new ways of satisfying the need for social connection, but it also provides ways in which this need can be thwarted. Ostracism, being ignored and excluded, is a common social experience that threatens fundamental human needs (i.e., belonging, control, self-esteem, and meaningful existence). Previous ostracism research has made use of a variety of paradigms, including minimal electronic-based interactions (e.g., Cyberball) and communication (e.g., chatrooms and Short Message Services). These paradigms, however, lack the mundane realism that many IVEs now offer. Further, IVE paradigms designed to measure ostracism may allow researchers to test more nuanced hypotheses about the effects of ostracism. We created an IVE in which ostracism could be manipulated experimentally, emulating a previously validated minimal ostracism paradigm. We found that participants who were ostracized in this IVE experienced the same negative effects demonstrated in other ostracism paradigms, providing, to our knowledge, the first evidence of the negative effects of ostracism in virtual environments. Though further research directly exploring these effects in online virtual environments is needed, this research suggests that individuals encountering ostracism in other virtual environments (such as massively multiplayer online role playing games; MMORPGs) may experience negative effects similar to those of being ostracized in real life. This possibility may have serious implications for individuals who are marginalized in their real life and turn to IVEs to satisfy their need for social connection. PMID:22897472

  2. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, several factors prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems ideal for interaction using a 3D dome display, other non-spherical grids can be used as well.

  3. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  4. Second Life, a 3-D Animated Virtual World: An Alternative Platform for (Art) Education

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2011-01-01

    3-D animated virtual worlds are no longer only for gaming. With the advance of technology, animated virtual worlds not only are found on every computer, but also connect users with the internet. Today, virtual worlds are created not only by companies, but also through the collaboration of users. Online 3-D animated virtual worlds provide a new…

  5. Load Assembly of the Ignitor Machine with 3D Interactive Virtual Reality

    NASA Astrophysics Data System (ADS)

    Migliori, S.; Pierattini, S.

    2003-10-01

    The main purpose of this work is to assist the Ignitor team in every phase of the project using the new Virtual Reality (VR) technology. Through VR it is possible to see, plan and test the machine assembly sequence and the total layout. We are also planning to simulate the remote handling systems in VR. The complexity of the system requires a large and powerful graphical device. ENEA's "Advanced Visualization Technology" team has implemented a repository file data structure integrated with the CATIA drawings coming from the designers of Ignitor. The 3D virtual mockup software is used to view and analyze all objects that compose the mockup and also to analyze the correct assembly sequences. ENEA's 3D immersive system and software are fully integrated into ENEA's supercomputing GRID infrastructure. At any time, all members of the Ignitor Project can view the status of the mockup in 3D (draft and/or final objects) through the net. During the conference, examples of the assembly sequence and load assembly structure will be presented.

  6. The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Lawless-Reljic, Sabine Karine

    2010-01-01

    Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text.…

  7. 3D virtual screening of large combinatorial spaces.

    PubMed

    Muegge, Ingo; Zhang, Qiang

    2015-01-01

    A new method for 3D in silico screening of large virtual combinatorial chemistry spaces is described. The software PharmShape screens millions of individual compounds applying a multi-conformational pharmacophore and shape based approach. Its extension, PharmShapeCC, is capable of screening trillions of compounds from tens of thousands of combinatorial libraries. Key elements of PharmShape and PharmShapeCC are customizable pharmacophore features, a composite inclusion sphere, library core intermediate clustering, and the determination of combinatorial library consensus orientations that allow for orthogonal enumeration of libraries. The performance of the software is illustrated by the prospective identification of a novel CXCR5 antagonist and examples of finding novel chemotypes from synthesizing and evaluating combinatorial hit libraries identified from PharmShapeCC screens for CCR1, LTA4 hydrolase, and MMP-13.

  8. Immersive virtual environments in cue exposure.

    PubMed

    Kuntze, M F; Stoermer, R; Mager, R; Roessler, A; Mueller-Spahn, F; Bullinger, A H

    2001-08-01

    Cue reactivity to drug-related stimuli is a frequently observed phenomenon in drug addiction. Cue reactivity refers to a classical conditioned response pattern that occurs when an addicted subject is exposed to drug-related stimuli. This response consists of physiological and cognitive reactions. Craving, a subjective desire to use the drug of choice, is believed to play an important role in the occurrence of relapse in the natural setting. Besides craving, other subjective cue-elicited reactions have been reported, including withdrawal symptoms, drug-agonistic effects, and mood swings. Physiological reactions that have been investigated include skin conductance, heart rate, salivation, and body temperature. Conditioned reactivity to cues is an important factor in addiction to alcohol, nicotine, opiates, and cocaine. Cue exposure treatment (CET) refers to a manualized, repeated exposure to drug-related cues, aimed at the reduction of cue reactivity by extinction. In CET, different stimuli are presented, for example, slides, video tapes, pictures, or paraphernalia in nonrealistic, experimental settings. Most often, assessments consist of subjective ratings on craving scales. Our pilot study will show that immersive virtual reality (IVR) is as good as, or even better than, classical devices in eliciting subjective and physiological craving symptoms. PMID:11708729

  9. The ALIVE Project: Astronomy Learning in Immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.; Denn, G.

    2008-06-01

    The Astronomy Learning in Immersive Virtual Environments (ALIVE) project seeks to discover learning modes and optimal teaching strategies using immersive virtual environments (VEs). VEs are computer-generated, three-dimensional environments that can be navigated to provide multiple perspectives. Immersive VEs provide the additional benefit of surrounding a viewer with the simulated reality. ALIVE evaluates the incorporation of an interactive, real-time "virtual universe" into formal college astronomy education. In the experiment, pre-course, post-course, and curriculum tests will be used to determine the efficacy of immersive visualizations presented in a digital planetarium versus the same visual simulations in the non-immersive setting of a normal classroom, as well as a control case using traditional classroom multimedia. To normalize for inter-instructor variability, each ALIVE instructor will teach at least one of each class in each of the three test groups.

  10. Participatory Gis: Experimentations for a 3d Social Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2013-08-01

    The dawn of GeoWeb 2.0, the geographic extension of Web 2.0, has opened new possibilities in terms of online dissemination and sharing of geospatial content, thus laying the foundations for a fruitful development of Participatory GIS (PGIS). The purpose of the study is to investigate the extension of PGIS applications, which are quite mature in the traditional two-dimensional framework, up to the third dimension. More specifically, the system should couple powerful 3D visualization with increased public participation by means of a tool allowing data collection from mobile devices (e.g. smartphones and tablets). The PGIS application, built using the open source NASA World Wind virtual globe, is focussed on the cultural and tourism heritage of the city of Como, located in Northern Italy. An authentication mechanism was implemented which allows users to create and manage customized projects through cartographic mash-ups of Web Map Service (WMS) layers. Saved projects populate a catalogue which is available to the entire community. Together with historical maps and the current cartography of the city, the system is also able to manage geo-tagged multimedia data, which come from user field surveys performed through mobile devices and report POIs (Points Of Interest). Each logged-in user can then contribute to POI characterization by adding textual and multimedia information (e.g. images, audio and video) directly on the globe. All in all, the resulting application allows users to create and share contributions as usually happens on social platforms, additionally providing a realistic 3D representation that enhances the expressive power of the data.
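
    A small illustration of the WMS mash-up mechanism mentioned above: composing a standard OGC WMS 1.1.1 GetMap request URL for a chosen layer and bounding box. The service endpoint and layer name below are placeholders, not the project's actual services.

    # Illustrative sketch: building an OGC WMS 1.1.1 GetMap request URL.
    from urllib.parse import urlencode

    def wms_getmap_url(base_url, layers, bbox, size=(512, 512), srs="EPSG:4326"):
        """Return a GetMap URL for `layers` over `bbox` = (minx, miny, maxx, maxy)."""
        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": layers, "STYLES": "",
            "SRS": srs, "BBOX": ",".join(str(v) for v in bbox),
            "WIDTH": size[0], "HEIGHT": size[1],
            "FORMAT": "image/png", "TRANSPARENT": "TRUE",
        }
        return f"{base_url}?{urlencode(params)}"

    # Example: a hypothetical historical-map layer over the Como area.
    print(wms_getmap_url("https://example.org/geoserver/wms",
                         "como:historical_map", (9.05, 45.79, 9.12, 45.83)))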

  11. Virtual Reality--Learning by Immersion.

    ERIC Educational Resources Information Center

    Dunning, Jeremy

    1998-01-01

    Discusses the use of virtual reality in educational software. Topics include CAVE (Computer-Assisted Virtual Environments); cost-effective virtual environment tools including QTVR (Quick Time Virtual Reality); interactive exercises; educational criteria for technology-based educational tools; and examples of screen displays. (LRW)

  12. Going Virtual… or Not: Development and Testing of a 3D Virtual Astronomy Environment

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, L.; Speck, A.; Ding, N.; Baldridge, S.; Witzig, S.; Laffey, J.

    2013-04-01

    We present preliminary results of a pilot study of students' transfer of an astronomy concept into a new environment. We also share our discoveries about which aspects of a 3D environment students consider motivational or discouraging for their learning. The study was conducted among 64 non-science-major students enrolled in an astronomy laboratory course. During the course, students learned the concept and applications of Kepler's laws using a 2D interactive environment. Later in the semester, the students were placed in a 3D environment in which they were asked to conduct observations and to answer a set of questions pertaining to Kepler's laws of planetary motion. In this study, we were interested in observing, scrutinizing, and assessing students' behavior: from the choices they made while creating their avatars (virtual representations), to the tools they chose to use, to their navigational patterns, to their levels of discourse in the environment. These helped us identify which features of the 3D environment our participants found helpful and interesting and which tools created unnecessary clutter and distraction. The students' social behavior patterns in the virtual environment, together with their answers to the questions, helped us determine how well they understood Kepler's laws, how well they could transfer the concepts to a new situation, and at what point a motivational tool such as a 3D environment becomes a disruption to constructive learning. Our findings confirmed that students construct deeper knowledge of a concept when they are fully immersed in the environment.

  13. Implementation of virtual models from sheet metal forming simulation into physical 3D colour models using 3D printing

    NASA Astrophysics Data System (ADS)

    Junk, S.

    2016-08-01

    Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results from simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could represent the virtual results are lacking. Physical 3D models can be created using 3D printing; they serve as an illustration and provide a better understanding of the simulation results. In this way, the results from the simulation can be made more "comprehensible" within a development team. This paper presents the possibilities of 3D colour printing, with particular consideration of the requirements regarding the implementation of sheet metal forming simulation. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is expounded upon on the basis of simulation results.

  14. The Components of Effective Teacher Training in the Use of Three-Dimensional Immersive Virtual Worlds for Learning and Instruction Purposes: A Literature Review

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin

    2014-01-01

    The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…

  15. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show a virtual environment of the Moon's surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere, according to the panorama orientation coordinates and camera parameters, to build the virtual scene. Since stars can be seen from the Moon at any time, the sun, planets and stars are rendered as the background on the sphere according to the time and the rover's location, based on the Hipparcos catalogue. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment that the operator can interact with and mark interesting objects in, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab with this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on the data from lunar exploration, we will make more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.
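
    A hedged sketch of the image-based rendering step described above: mapping view directions to texture coordinates of a stitched equirectangular 360-degree panorama wrapped on the inside of a viewing sphere. The yaw offset standing in for the published panorama orientation coordinates is an assumption for illustration.

    # Illustrative sketch: equirectangular panorama lookup on the inside of a sphere.
    import numpy as np

    def panorama_uv(directions, yaw_offset_deg=0.0):
        """Map unit view directions (N, 3) to (u, v) texture coordinates in [0, 1]."""
        d = np.asarray(directions, float)
        d = d / np.linalg.norm(d, axis=1, keepdims=True)
        yaw = np.arctan2(d[:, 1], d[:, 0]) + np.radians(yaw_offset_deg)  # azimuth
        pitch = np.arcsin(np.clip(d[:, 2], -1.0, 1.0))                   # elevation
        u = (yaw / (2 * np.pi)) % 1.0
        v = 0.5 - pitch / np.pi
        return np.column_stack([u, v])

    # Example: look straight ahead, to the left, and straight up.
    print(panorama_uv([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))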

  16. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
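
    A brief sketch of one common convention for the normalized RMS tracking error mentioned above: the RMS target-cursor distance divided by the RMS excursion of the target itself. The normalization choice is an assumption, not necessarily the study's exact definition.

    # Illustrative sketch: normalized RMS tracking error between target and cursor paths.
    import numpy as np

    def normalized_rms_error(target, cursor):
        """target, cursor: arrays of shape (N, 3) sampled at the same instants."""
        target, cursor = np.asarray(target, float), np.asarray(cursor, float)
        err = np.sqrt(np.mean(np.sum((cursor - target) ** 2, axis=1)))
        scale = np.sqrt(np.mean(np.sum((target - target.mean(axis=0)) ** 2, axis=1)))
        return err / scale

    t = np.linspace(0, 2 * np.pi, 500)
    target = np.column_stack([np.cos(t), np.sin(t), 0.2 * np.sin(3 * t)])
    cursor = target + 0.05 * np.random.default_rng(1).standard_normal(target.shape)
    print(normalized_rms_error(target, cursor))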

  17. Performance of dental students versus prosthodontics residents on a 3D immersive haptic simulator.

    PubMed

    Eve, Elizabeth J; Koo, Samuel; Alshihri, Abdulmonem A; Cormier, Jeremy; Kozhenikov, Maria; Donoff, R Bruce; Karimbux, Nadeem Y

    2014-04-01

    This study evaluated the performance of dental students versus prosthodontics residents on a simulated caries removal exercise using a newly designed, 3D immersive haptic simulator. The intent of this study was to provide an initial assessment of the simulator's construct validity, which in the context of this experiment was defined as its ability to detect a statistically significant performance difference between novice dental students (n=12) and experienced prosthodontics residents (n=14). Both groups received equivalent calibration training on the simulator and repeated the same caries removal exercise three times. Novice and experienced subjects' average performance differed significantly on the caries removal exercise with respect to the percentage of carious lesion removed and volume of surrounding sound tooth structure removed (p<0.05). Experienced subjects removed a greater portion of the carious lesion, but also a greater volume of the surrounding tooth structure. Efficiency, defined as percentage of carious lesion removed over drilling time, improved significantly over the course of the experiment for both novice and experienced subjects (p<0.001). Within the limitations of this study, experienced subjects removed a greater portion of carious lesion on a 3D immersive haptic simulator. These results are a first step in establishing the validity of this device. PMID:24706694

  18. Performance of dental students versus prosthodontics residents on a 3D immersive haptic simulator.

    PubMed

    Eve, Elizabeth J; Koo, Samuel; Alshihri, Abdulmonem A; Cormier, Jeremy; Kozhenikov, Maria; Donoff, R Bruce; Karimbux, Nadeem Y

    2014-04-01

    This study evaluated the performance of dental students versus prosthodontics residents on a simulated caries removal exercise using a newly designed, 3D immersive haptic simulator. The intent of this study was to provide an initial assessment of the simulator's construct validity, which in the context of this experiment was defined as its ability to detect a statistically significant performance difference between novice dental students (n=12) and experienced prosthodontics residents (n=14). Both groups received equivalent calibration training on the simulator and repeated the same caries removal exercise three times. Novice and experienced subjects' average performance differed significantly on the caries removal exercise with respect to the percentage of carious lesion removed and volume of surrounding sound tooth structure removed (p<0.05). Experienced subjects removed a greater portion of the carious lesion, but also a greater volume of the surrounding tooth structure. Efficiency, defined as percentage of carious lesion removed over drilling time, improved significantly over the course of the experiment for both novice and experienced subjects (p<0.001). Within the limitations of this study, experienced subjects removed a greater portion of carious lesion on a 3D immersive haptic simulator. These results are a first step in establishing the validity of this device.

  19. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity

    PubMed Central

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-01-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the case of carbonisation of nanopatterned polymers, polymer flow and a subsequent pattern change may occur in order to relieve their high surface energies. Here, we demonstrated that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhanced the thermal resistance and maintained the structural integrity during the heat treatment. We employed liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of SU8 nanopatterns were characterised. The micro-crystallinity of the SU8-derived carbon nanopatterns was also characterised. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provide a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, in block copolymer or surfactant self-assemblies. PMID:26677949

  20. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity

    NASA Astrophysics Data System (ADS)

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-12-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the case of carbonisation of nanopatterned polymers, polymer flow and subsequent pattern change may occur in order to relieve their high surface energies. Here, we demonstrated that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhanced the thermal resistance and maintained the structural integrity during the heat treatment. We employed the liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of SU8 nanopatterns were characterised. The micro-crystallinity of the SU8-derived carbon nanopatterns was also characterised. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provides a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, for block copolymer or surfactant self-assemblies.

  1. Student Responses to Their Immersion in a Virtual Environment.

    ERIC Educational Resources Information Center

    Taylor, Wayne

    Undertaken in conjunction with a larger study that investigated the educational efficacy of students building their own virtual worlds, this study measures the reactions of students in grades 4-12 to the experience of being immersed in virtual reality (VR). The study investigated the sense of "presence" experienced by the students, the extent to…

  2. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included.

  3. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included. PMID:24678025

  4. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with resolutions from hundreds of meters down to 1 meter per pixel, and corresponding orthoimages with resolutions from a few hundred meters down to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The current rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.
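
    The data-preparation step implied above, turning a Digital Elevation Model and its orthoimage into terrain a game engine can render, can be sketched in a few lines. The snippet below (a simplified illustration, not the VR-Planets pipeline) converts a small height grid into the vertex and triangle lists most engines ingest; the grid values and the 25 m cell size are arbitrary stand-ins.

        import numpy as np

        def dem_to_mesh(dem, cell_size=1.0):
            """Convert a 2D height grid (DEM) into vertices and triangle indices.

            dem       : (rows, cols) array of elevations
            cell_size : ground distance between neighbouring samples
            Returns (vertices[N, 3], triangles[M, 3]) suitable for a game-engine mesh.
            """
            rows, cols = dem.shape
            ys, xs = np.mgrid[0:rows, 0:cols]
            vertices = np.column_stack([
                xs.ravel() * cell_size,
                ys.ravel() * cell_size,
                dem.ravel(),
            ])
            triangles = []
            for r in range(rows - 1):
                for c in range(cols - 1):
                    i = r * cols + c                               # top-left corner of the cell
                    triangles.append([i, i + 1, i + cols])         # first triangle of the quad
                    triangles.append([i + 1, i + cols + 1, i + cols])  # second triangle
            return vertices, np.asarray(triangles)

        # Tiny synthetic DEM just to exercise the function
        demo_dem = np.array([[0.0, 1.0, 2.0],
                             [1.0, 2.0, 3.0],
                             [2.0, 3.0, 4.0]])
        verts, tris = dem_to_mesh(demo_dem, cell_size=25.0)
        print(verts.shape, tris.shape)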

  5. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  6. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object. It involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the VR (Virtual Reality) class was applied. In the elaborated interactive application, procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion, and the drive system of the manipulator. Additionally, a procedure was created for turning on and off the output crushing head mounted on the last element of the manipulator. Procedures were also established in the interactive application for receiving input data from external software, on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In the developed application, procedures were created that are responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which is verified the
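
    The integration pattern described above, a control program whose outputs are periodically read and forwarded to an interactive visualization, can be sketched generically. The code below does not use the actual DDE interface or the authors' Visual Basic application; the controller and visualization classes are hypothetical stand-ins meant only to show the polling-and-forwarding loop.

        import time

        class SimulatedController:
            """Hypothetical stand-in for a virtual PLC running a ladder-diagram program."""
            def __init__(self):
                self.step = 0

            def read_outputs(self):
                # A trivial invented work cycle: alternate the translatory and rotary drives,
                # and switch the crushing head on during the rotary phase.
                self.step += 1
                rotary_phase = (self.step // 5) % 2 == 1
                return {
                    "drive_translation": not rotary_phase,
                    "drive_rotation": rotary_phase,
                    "crushing_head_on": rotary_phase,
                }

        class InteractiveVisualization:
            """Hypothetical stand-in for the VR-class interactive application."""
            def apply_actuator_states(self, states):
                print("actuators ->", states)

        def integration_loop(controller, visualization, cycles=10, period_s=0.1):
            """Poll controller outputs and forward them to the visualization (DDE-like exchange)."""
            for _ in range(cycles):
                visualization.apply_actuator_states(controller.read_outputs())
                time.sleep(period_s)

        integration_loop(SimulatedController(), InteractiveVisualization())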

  7. A Second Life for eHealth: Prospects for the Use of 3-D Virtual Worlds in Clinical Psychology

    PubMed Central

    Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-01-01

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed. PMID:18678557

  8. A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.

    PubMed

    Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-01-01

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.

  9. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies

    PubMed Central

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2010-01-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the
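
    The stabilisation device mentioned above, strong-coupling iteration combined with under-relaxation and Aitken's acceleration, can be written compactly for a scalar fixed-point problem. The sketch below applies the standard Aitken relaxation-factor update to a toy interface equation; it illustrates the technique only and is in no way the CURVIB solver itself.

        def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=100):
            """Solve x = g(x) by under-relaxed fixed-point iteration with Aitken's dynamic relaxation.

            The relaxation factor is updated from successive interface residuals, the usual
            trick for stabilising strongly coupled partitioned FSI iterations.
            """
            x = x0
            omega = omega0
            r_prev = None
            for k in range(max_iter):
                r = g(x) - x                      # interface residual
                if abs(r) < tol:
                    return x, k
                if r_prev is not None:
                    # Aitken update: omega_k = -omega_{k-1} * r_{k-1} / (r_k - r_{k-1})
                    omega = -omega * r_prev / (r - r_prev)
                x = x + omega * r                 # under-relaxed update
                r_prev = r
            return x, max_iter

        # Toy "interface equation": plain iteration x <- g(x) diverges because |g'| > 1,
        # but the Aitken-relaxed iteration converges quickly to the fixed point x = 0.8.
        g = lambda x: 2.0 - 1.5 * x
        x_star, iters = aitken_fixed_point(g, x0=0.0)
        print(x_star, iters)

    For the toy map chosen here the plain iteration diverges, while the relaxed iteration reaches the fixed point in a few steps; this is the behaviour the under-relaxed strong-coupling loop described in the abstract relies on.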

  10. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...

  11. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Ongoing work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  12. Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies

    ERIC Educational Resources Information Center

    Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis

    2009-01-01

    We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…

  13. 3D Inhabited Virtual Worlds: Interactivity and Interaction between Avatars, Autonomous Agents, and Users.

    ERIC Educational Resources Information Center

    Jensen, Jens F.

    This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…

  14. The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning

    ERIC Educational Resources Information Center

    Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos

    2004-01-01

    This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…

  15. Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-12-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of a scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial than DVE for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing different frames of reference can facilitate understanding abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.

  16. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castral landscape. Visible from the valley, it was named "the Eye of the Witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has been for several years the object of numerous archaeological studies and is today at the heart of a project for the valorisation of the vestiges. A key objective, among the numerous planned works, was to produce a 3D model of the site in its current state, in other words an as-built virtual model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The ICube/INSA lab team was responsible for the realization of this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following ones: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail
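
    TLS surveys of the size implied above are usually decimated before meshing or web delivery. The numpy-only sketch below shows one common approach, voxel-grid downsampling (one averaged point kept per occupied voxel); it is a generic illustration and not the processing chain used by the ICube/INSA team, and the synthetic cloud and 0.5 m voxel size are arbitrary.

        import numpy as np

        def voxel_downsample(points, voxel_size):
            """Average all points that fall into the same cubic voxel.

            points     : (N, 3) array of x, y, z coordinates
            voxel_size : edge length of the voxel grid (same units as the coordinates)
            Returns an (M, 3) array with one representative point per occupied voxel.
            """
            keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
            # Group points by voxel index and average each group.
            _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
            sums = np.zeros((counts.size, 3))
            np.add.at(sums, inverse, points)
            return sums / counts[:, None]

        # Synthetic "scan": 10,000 random points in a 10 m cube, decimated to a 0.5 m grid.
        rng = np.random.default_rng(0)
        cloud = rng.uniform(0.0, 10.0, size=(10_000, 3))
        reduced = voxel_downsample(cloud, voxel_size=0.5)
        print(cloud.shape, "->", reduced.shape)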

  17. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  18. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  19. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    NASA Astrophysics Data System (ADS)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application that has been developed by a collaboration project between the NTNU University Library at the Norwegian University of Science and Technology in Trondheim, Norway, and the Percro laboratory of Sant'Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academy partners, aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor to present manuscript content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototypes of books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether the immersion in such environments can create wider engagement and support learning. The metaphor of 3D books and game design in combination allows digital books to be handled through a tactile experience, substituting for physical browsing. In this paper we present some preliminary results about the enrichment of the user experience in such an environment.

  20. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

    Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  1. Evaluation of Home Delivery of Lectures Utilizing 3D Virtual Space Infrastructure

    ERIC Educational Resources Information Center

    Nishide, Ryo; Shima, Ryoichi; Araie, Hiromu; Ueshima, Shinichi

    2007-01-01

    Evaluation experiments have been essential in exploring home delivery of lectures for which users can experience campus lifestyle and distant learning through 3D virtual space. This paper discusses the necessity of virtual space for distant learners by examining the effects of virtual space. The authors have pursued the possibility of…

  2. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  3. Situating Pedagogies, Positions and Practices in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi; Gourlay, Lesley; Tombs, Cathy; Steils, Nicole; Tombs, Gemma; Mawer, Matt

    2010-01-01

    Background: The literature on immersive virtual worlds and e-learning to date largely indicates that technology has led the pedagogy. Although rationales for implementing e-learning have included flexibility of provision and supporting diversity, none of these recommendations has helped to provide strong pedagogical location. Furthermore, there is…

  4. Immersive Virtual Worlds in University-Level Human Geography Courses

    ERIC Educational Resources Information Center

    Dittmer, Jason

    2010-01-01

    This paper addresses the potential for increased deployment of immersive virtual worlds in higher geographic education. An account of current practice regarding popular culture in the geography classroom is offered, focusing on the objectification of popular culture rather than its constitutive role vis-a-vis place. Current e-learning practice is…

  5. Virtual Worlds: Inherently Immersive, Highly Social Learning Spaces

    ERIC Educational Resources Information Center

    Johnson, Laurence F.; Levine, Alan H.

    2008-01-01

    Our essential premise in this article is that immersive learning is not new and that, as a practical matter, it is useful to view the relatively new virtual world platforms through that lens. By doing so, the premise continues, developers of learning experiences for these spaces will have a large theoretical base upon which to draw, as well as…

  6. Virtual 3D microscopy using multiplane whole slide images in diagnostic pathology.

    PubMed

    Kalinski, Thomas; Zwönitzer, Ralf; Sel, Saadettin; Evert, Matthias; Guenther, Thomas; Hofmann, Harald; Bernarding, Johannes; Roessner, Albert

    2008-08-01

    To reproduce focusing in virtual microscopy, it is necessary to construct 3-dimensional (3D) virtual slides composed of whole slide images with different focuses. As focusing is frequently used for the assessment of Helicobacter pylori colonization in diagnostic pathology, we prepared virtual 3D slides with up to 9 focus planes from 144 gastric biopsy specimens with or without H pylori gastritis. The biopsy specimens were diagnosed in a blinded manner by 3 pathologists according to the updated Sydney classification using conventional microscopy, virtual microscopy with a single focus plane, and virtual 3D microscopy with 5 and 9 focus planes enabling virtual focusing. Regarding the classification of H pylori, we found a positive correlation between the number of focus planes used in virtual microscopy and the number of correct diagnoses as determined by conventional microscopy. Concerning H pylori positivity, the specificity and sensitivity of virtual 3D microscopy using virtual slides with 9 focus planes achieved a minimum of 0.95 each, which was approximately the same as in conventional microscopy. We consider virtual 3D microscopy appropriate for primary diagnosis of H pylori gastritis and equivalent to conventional microscopy.
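
    The agreement figures quoted above (sensitivity and specificity of at least 0.95 for H pylori positivity) come from a standard two-by-two comparison of the virtual-microscopy diagnoses against the conventional-microscopy reference. A minimal sketch of that computation, with invented labels, is shown below; it is not the authors' analysis code.

        def sensitivity_specificity(reference, test):
            """Sensitivity and specificity of `test` diagnoses against `reference` diagnoses.

            Both arguments are sequences of booleans (True = H. pylori positive).
            """
            tp = sum(r and t for r, t in zip(reference, test))
            tn = sum((not r) and (not t) for r, t in zip(reference, test))
            fn = sum(r and (not t) for r, t in zip(reference, test))
            fp = sum((not r) and t for r, t in zip(reference, test))
            return tp / (tp + fn), tn / (tn + fp)

        # Invented example: conventional microscopy as reference vs. virtual 3D microscopy.
        reference = [True, True, True, False, False, False, True, False]
        virtual3d = [True, True, False, False, False, False, True, False]
        sens, spec = sensitivity_specificity(reference, virtual3d)
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")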

  7. Intelligent Tutors in Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Yan, Peng; Slator, Brian M.; Vender, Bradley; Jin, Wei; Kariluoma, Matti; Borchert, Otto; Hokanson, Guy; Aggarwal, Vaibhav; Cosmano, Bob; Cox, Kathleen T.; Pilch, André; Marry, Andrew

    2013-01-01

    Research into virtual role-based learning has progressed over the past decade. Modern issues include gauging the difficulty of designing a goal system capable of meeting the requirements of students with different knowledge levels, and the reasonability and possibility of taking advantage of the well-designed formula and techniques served in other…

  8. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium to accurately visualize crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  9. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium to accurately visualize crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. PMID:20533989

  10. 3-D Virtual and Physical Reconstruction of Bendego Iron

    NASA Astrophysics Data System (ADS)

    Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.

    2012-09-01

    The use of 3D laser scanning of meteorites preserves their original shape before cutting, and saving the data in STL (stereolithography) format makes it possible to print three-dimensional physical models and to generate a digital replica.

  11. Orchestrating learning during implementation of a 3D virtual world

    NASA Astrophysics Data System (ADS)

    Karakus, Turkan; Baydas, Ozlem; Gunay, Fatma; Coban, Murat; Goktas, Yuksel

    2016-10-01

    There are many issues to be considered when designing virtual worlds for educational purposes. In this study, the term orchestration has acquired a new definition as the moderation of problems encountered during the activity of turning a virtual world into an educational setting for winter sports. A development case showed that community plays a key role in both the emergence of challenges and in the determination of their solutions. The implications of this study showed that activity theory was a useful tool for understanding contextual issues. Therefore, instructional designers first developed relevant tools and community-based solutions. This study attempts to use activity theory in a prescriptive way, though it is known as a descriptive theory. Finally, since virtual world projects have many aspects, the variety of challenges and practical solutions presented in this study will provide practitioners with suggestions on how to overcome problems in future.

  12. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  13. Unstructured Cartesian refinement with sharp interface immersed boundary method for 3D unsteady incompressible flows

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis

    2016-11-01

    A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates
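
    At the core of the fractional step method named above is a pressure-projection step: an intermediate velocity field is made divergence-free by solving a Poisson equation for pressure and subtracting its gradient. The sketch below performs only that projection, on a tiny periodic 2D grid with a Jacobi-iterated Poisson solve; it is a didactic illustration and is far removed from the unstructured, locally refined, immersed-boundary solver of the abstract.

        import numpy as np

        def project_divergence_free(u, v, h, iters=5000):
            """Make a periodic 2D velocity field (u, v) discretely divergence-free.

            Classic pressure-projection (fractional) step: solve laplacian(p) = div(u, v)
            with Jacobi sweeps, then subtract grad(p). Backward differences are used for
            the divergence and forward differences for the gradient so that the two
            operators are discretely compatible on the periodic grid.
            """
            div = (u - np.roll(u, 1, axis=1)) / h + (v - np.roll(v, 1, axis=0)) / h
            p = np.zeros_like(u)
            for _ in range(iters):  # Jacobi sweeps for the 5-point Laplacian
                p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                            np.roll(p, 1, 1) + np.roll(p, -1, 1) - h * h * div)
            u_new = u - (np.roll(p, -1, axis=1) - p) / h
            v_new = v - (np.roll(p, -1, axis=0) - p) / h
            return u_new, v_new

        # Demo: a field with strong divergence becomes discretely divergence-free.
        n = 32
        h = 1.0 / n
        x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h)
        u = np.sin(2 * np.pi * x)
        v = np.cos(2 * np.pi * y)
        u2, v2 = project_divergence_free(u, v, h)
        div_after = (u2 - np.roll(u2, 1, axis=1)) / h + (v2 - np.roll(v2, 1, axis=0)) / h
        print("max |div| after projection:", float(np.abs(div_after).max()))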

  14. Spilling the beans on java 3D: a tool for the virtual anatomist.

    PubMed

    Guttmann, G D

    1999-04-15

    The computing world has just provided the anatomist with another tool: Java 3D, within the Java 2 platform. On December 9, 1998, Sun Microsystems released Java 2. Java 3D classes are now included in the jar (Java Archive) archives of the extensions directory of Java 2. Java 3D is also a part of the Java Media Suite of APIs (Application Programming Interfaces). But what is Java? How does Java 3D work? How do you view Java 3D objects? A brief introduction to the concepts of Java and object-oriented programming is provided. Also, there is a short description of the tools of Java 3D and of the Java 3D viewer. Thus, the virtual anatomist has another set of computer tools to use for modeling various aspects of anatomy, such as embryological development. Also, the virtual anatomist will be able to assist the surgeon with virtual surgery using the tools found in Java 3D. Java 3D will be able to fill gaps, such as the lack of platform independence, interactivity, and manipulability of 3D images, currently existing in many anatomical computer-aided learning programs.

  15. Development of a 3D immersive videogame to improve arm-postural coordination in patients with TBI

    PubMed Central

    2011-01-01

    Background: Traumatic brain injury (TBI) disrupts the central and executive mechanisms of arm(s) and postural (trunk and legs) coordination. To address these issues, we developed a 3D immersive videogame, Octopus. The game was developed using the basic principles of videogame design and previous experience of using videogames for rehabilitation of patients with acquired brain injuries. Unlike many other custom-designed virtual environments, Octopus included an actual gaming component with a system of multiple rewards, making the game challenging, competitive, motivating and fun. The effect of short-term practice with the Octopus game on arm-postural coordination in patients with TBI was tested. Methods: The game was developed using WorldViz Vizard software, integrated with the Qualysis system for motion analysis. Avatars of the participant's hands precisely reproducing the real-time kinematic patterns were synchronized with the simulated environment, presented in a first-person 3D view on an 82-inch DLP screen. 13 individuals with mild-to-moderate manifestations of TBI participated in the study. While standing in front of the screen, the participants interacted with a computer-generated environment by popping bubbles blown by the Octopus. The bubbles followed a specific trajectory. Interception of the bubbles with the left or right hand avatar allowed flexible use of the postural segments for balance maintenance and arm transport. All participants practiced ten 90-s gaming trials during a single session, followed by a retention test. Arm-postural coordination was analysed using principal component analysis. Results: As a result of the short-term practice, the participants improved in game performance, arm movement time, and precision. Improvements were achieved mostly by adapting efficient arm-postural coordination strategies. Of the 13 participants, 10 showed an immediate increase in arm forward reach and single-leg stance time. Conclusion: These results support the
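
    The principal component analysis used above to quantify arm-postural coordination amounts to an eigendecomposition of the covariance of the recorded kinematic signals, with the share of variance captured by the first component serving as a simple coordination index. A generic sketch with synthetic arm, trunk, and leg signals follows; it is not the authors' analysis pipeline, and all signals are invented.

        import numpy as np

        def pca_variance_explained(signals):
            """PCA via SVD on mean-centred kinematic signals.

            signals : (samples, channels) array, e.g. time series of arm and trunk angles.
            Returns the fraction of total variance explained by each principal component.
            """
            centred = signals - signals.mean(axis=0)
            _, s, _ = np.linalg.svd(centred, full_matrices=False)
            var = s ** 2
            return var / var.sum()

        # Synthetic example: arm, trunk and leg signals that largely co-vary (coordinated movement)
        rng = np.random.default_rng(1)
        t = np.linspace(0, 2 * np.pi, 500)
        arm = np.sin(t) + 0.05 * rng.standard_normal(t.size)
        trunk = 0.6 * np.sin(t) + 0.05 * rng.standard_normal(t.size)
        leg = 0.3 * np.sin(t) + 0.05 * rng.standard_normal(t.size)
        explained = pca_variance_explained(np.column_stack([arm, trunk, leg]))
        print("variance explained by PC1..PC3:", np.round(explained, 3))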

  16. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the Engelbourg ruined castle in Thann, Alsace, France, has been for some years the object of close attention from the city, which owns it, and from partners such as historians and archaeologists who are in charge of its study. The valorisation of the site is one of the main objectives, as well as its conservation and its knowledge. The aim of this project is to use the environment of a virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through diverse scripts, that convert the viewer into a real 3D interface. Beginning with a first virtual tour that contains about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were acquired. For the documentation, other sets of images were acquired in various seasons and weather conditions, which allows documenting the site in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, which is virtual too, was also included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (reports made during the acquisition phases, the restoration works, the excavations, etc.) and digital georeferenced documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image

  17. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  18. Introducing an Avatar Acceptance Model: Student Intention to Use 3D Immersive Learning Tools in an Online Learning Classroom

    ERIC Educational Resources Information Center

    Kemp, Jeremy William

    2011-01-01

    This quantitative survey study examines the willingness of online students to adopt an immersive virtual environment as a classroom tool and compares this with their feelings about more traditional learning modes including our ANGEL learning management system and the Elluminate live Web conferencing tool. I surveyed 1,108 graduate students in…

  19. Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"

    ERIC Educational Resources Information Center

    Minocha, Shailey; Reeves, Ahmad John

    2010-01-01

    "Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or research…

  20. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  1. Gaming in a 3D Multiuser Virtual Environment: Engaging Students in Science Lessons

    ERIC Educational Resources Information Center

    Lim, Cher, P.; Nonis, Darren; Hedberg, John

    2006-01-01

    Based on the exploratory study of a 3D multiuser virtual environment (3D MUVE), known as Quest Atlantis (QA), in a series of Primary Four (10- to 11-year-olds) Science lessons at Orchard Primary School in Singapore, this paper examines the issues of learning engagement and describes the socio-cultural context of QA's implementation. The students…

  2. Virtually numbed: immersive video gaming alters real-life experience.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen

    2014-04-01

    As actors in a highly mechanized environment, we are citizens of a world populated not only by fellow humans, but also by virtual characters (avatars). Does immersive video gaming, during which the player takes on the mantle of an avatar, prompt people to adopt the coldness and rigidity associated with robotic behavior and desensitize them to real-life experience? In one study, we correlated participants' reported video-gaming behavior with their emotional rigidity (as indicated by the number of paperclips that they removed from ice-cold water). In a second experiment, we manipulated immersive and nonimmersive gaming behavior and then likewise measured the extent of the participants' emotional rigidity. Both studies yielded reliable impacts, and thus suggest that immersion into a robotic viewpoint desensitizes people to real-life experiences in oneself and others.

  3. Envisioning the future of home care: applications of immersive virtual reality.

    PubMed

    Brennan, Patricia Flatley; Arnott Smith, Catherine; Ponto, Kevin; Radwin, Robert; Kreutz, Kendra

    2013-01-01

    Accelerating the design of technologies to support health in the home requires (1) better understanding of how the household context shapes consumer health behaviors and (2) the opportunity to afford engineers, designers, and health professionals the chance to systematically study the home environment. We developed the Living Environments Laboratory (LEL) with a fully immersive, six-sided virtual reality CAVE to enable recreation of a broad range of household environments. We have successfully developed a virtual apartment, including a kitchen, living space, and bathroom. Over 2000 people have visited the LEL CAVE. Participants use an electronic wand to activate common household affordances such as opening a refrigerator door or lifting a cup. Challenges currently being explored include creating natural gestures to interface with virtual objects, developing robust, simple procedures to capture actual living environments and render them in a 3D visualization, and devising systematic, stable terminologies to characterize home environments.

  4. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
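
    The object taxonomy listed above (interface objects, support objects, geometric entities, and finite elements, grouped by a container, each encapsulating properties, methods, and events) maps naturally onto an ordinary class hierarchy. The schematic Python rendering below mirrors those class names, but everything else, including the event names and the example objects, is invented for illustration.

        class VLObject:
            """Base class: every virtual-lab object has properties, methods, and event hooks."""
            def __init__(self, name):
                self.name = name
                self.properties = {}
                self.event_handlers = {}          # event name -> list of callbacks

            def on(self, event, handler):
                self.event_handlers.setdefault(event, []).append(handler)

            def fire(self, event, **payload):
                for handler in self.event_handlers.get(event, []):
                    handler(self, **payload)

        class InterfaceObject(VLObject): pass     # e.g. wand, haptic glove, stereo screen
        class SupportObject(VLObject): pass       # e.g. lights, cameras, audio sources
        class GeometricEntity(VLObject): pass     # e.g. the test-article geometry
        class FiniteElement(VLObject): pass       # e.g. an element of the simulated structure

        class Container(VLObject):
            """Groups several objects so they can be manipulated as one."""
            def __init__(self, name, children=()):
                super().__init__(name)
                self.children = list(children)

        # Tiny usage example: a virtual structures-testing setup reacting to a "load applied" event.
        specimen = GeometricEntity("test_specimen")
        specimen.on("load_applied", lambda obj, load: print(f"{obj.name}: load {load} N applied"))
        lab = Container("virtual_structures_lab", [InterfaceObject("wand"), specimen])
        specimen.fire("load_applied", load=500)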

  5. From Cognitive Capability to Social Reform? Shifting Perceptions of Learning in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi

    2008-01-01

    Learning in immersive virtual worlds (simulations and virtual worlds such as Second Life) could become a central learning approach in many curricula, but the socio-political impact of virtual world learning on higher education remains under-researched. Much of the recent research into learning in immersive virtual worlds centres around games and…

  6. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries, educational settings, etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show the data in 3D; and if yes, (a) what type of 3D we should use, (b) for what task types, and (c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  7. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for getting 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and for scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data of other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. They are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible; • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  8. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  9. The Virtual-casing Principle For 3D Toroidal Systems

    SciTech Connect

    Lazerson, Samuel A.

    2014-02-24

    The capability to calculate the magnetic field due to the plasma currents in a toroidally confined magnetic fusion equilibrium is of manifest relevance to equilibrium reconstruction and stellarator divertor design. Two methodologies arise for calculating such quantities. The first is a volume integral over the plasma current density for a given equilibrium; such an integral is computationally expensive. The second is a surface integral over a surface current on the equilibrium boundary. This method is computationally desirable as the cost of the calculation does not grow with the radial resolution required by the volume integral. This surface integral method has come to be known as the "virtual-casing principle". In this paper, a full derivation of this method is presented along with a discussion regarding its optimal application.
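
    For orientation, the two approaches can be written schematically as follows (notation and sign conventions are illustrative; the paper gives the full derivation). When the plasma boundary S is a flux surface and the evaluation point x lies outside it, the Biot-Savart volume integral over the current density J can be replaced by a surface integral over an equivalent surface current K:

        % Schematic only; see the paper for the rigorous statement and conventions.
        \begin{align*}
        \mathbf{B}_{\mathrm{plasma}}(\mathbf{x})
          &= \frac{\mu_0}{4\pi}\int_{V}
             \frac{\mathbf{J}(\mathbf{x}')\times(\mathbf{x}-\mathbf{x}')}
                  {\lvert\mathbf{x}-\mathbf{x}'\rvert^{3}}\,dV'
          && \text{(volume integral over the plasma current)}\\
        \mathbf{B}_{\mathrm{plasma}}(\mathbf{x})
          &= \frac{\mu_0}{4\pi}\oint_{S}
             \frac{\mathbf{K}(\mathbf{x}')\times(\mathbf{x}-\mathbf{x}')}
                  {\lvert\mathbf{x}-\mathbf{x}'\rvert^{3}}\,dS',
             \qquad \mathbf{K}=\frac{\hat{\mathbf{n}}\times\mathbf{B}}{\mu_0}
          && \text{(virtual-casing surface integral)}
        \end{align*}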

  10. Spatial integration of boundaries in a 3D virtual environment.

    PubMed

    Bouchekioua, Youcef; Miller, Holly C; Craddock, Paul; Blaisdell, Aaron P; Molet, Mikael

    2013-10-01

    Prior research, using two- and three-dimensional environments, has found that when both human and nonhuman animals independently acquire two associations between landmarks with a common landmark (e.g., LM1-LM2 and LM2-LM3), each with its own spatial relationship, they behave as if the two unique LMs have a known spatial relationship despite their never having been paired. Seemingly, they have integrated the two associations to create a third association with its own spatial relationship (LM1-LM3). Using sensory preconditioning (Experiment 1) and second-order conditioning (Experiment 2) procedures, we found that human participants integrated information about the boundaries of pathways to locate a goal within a three-dimensional virtual environment in the absence of any relevant landmarks. Spatial integration depended on the participant experiencing a common boundary feature with which to link the pathways. These results suggest that the principles of associative learning also apply to the boundaries of an environment.

  11. Cognitive factors associated with immersion in virtual environments

    NASA Technical Reports Server (NTRS)

    Psotka, Joseph; Davison, Sharon

    1993-01-01

    Immersion into the dataspace provided by a computer, and the feeling of really being there or 'presence', are commonly acknowledged as the uniquely important features of virtual reality environments. How immersed one feels appears to be determined by a complex set of physical components and affordances of the environment, and as yet poorly understood psychological processes. Pimentel and Teixeira say that the experience of being immersed in a computer-generated world involves the same mental shift of 'suspending your disbelief for a period of time' as 'when you get wrapped up in a good novel or become absorbed in playing a computer game'. That sounds as if it could be right, but it would be good to get some evidence for these important conclusions. It might be even better to try to connect these statements with theoretical positions that try to do justice to complex cognitive processes. The basic precondition for understanding Virtual Reality (VR) is understanding the spatial representation systems that localize our bodies or egocenters in space. The effort to understand these cognitive processes is being driven with new energy by the pragmatic demands of successful virtual reality environments, but the literature is largely sparse and anecdotal.

  12. Computer-assisted three-dimensional surgical planning: 3D virtual articulator: technical note.

    PubMed

    Ghanai, S; Marmulla, R; Wiechnik, J; Mühling, J; Kotrikova, B

    2010-01-01

    This study presents a computer-assisted planning system for dysgnathia treatment. It describes the process of information gathering using a virtual articulator and how the splints are constructed for orthognathic surgery. The deviation of the virtually planned splints is shown in six cases on the basis of conventionally planned cases. In all cases the plaster models were prepared and scanned using a 3D laser scanner. Successive lateral and posterior-anterior cephalometric images were used for reconstruction before surgery. By identifying specific points on the X-rays and marking them on the virtual models, it was possible to enhance the 2D images to create a realistic 3D environment and to perform virtual repositioning of the jaw. A hexapod was used to transfer the virtual planning to the real splints. Preliminary results showed that conventional repositioning could be replicated using the virtual articulator.

  13. Ontological implications of being in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Morie, Jacquelyn F.

    2008-02-01

    The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical world, we are also embodied within the virtual by means of technology that translates our bodied actions into interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where we can exist simultaneously in two spaces at once, with the possible exception of meta-physical states such as shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being, how we enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of wondrous experiences we can both hope for and expect.

  14. The virtual-casing principle for 3D toroidal systems

    NASA Astrophysics Data System (ADS)

    Lazerson, S. A.

    2012-12-01

    The capability to calculate the magnetic field due to the plasma currents in a toroidally confined magnetic fusion equilibrium is of manifest relevance to equilibrium reconstruction and stellarator divertor design. Two methodologies arise for calculating such quantities. The first is a volume integral over the plasma current density for a given equilibrium; such an integral is computationally expensive. The second is a surface integral over a surface current on the equilibrium boundary. This method is computationally desirable as the cost of the calculation does not grow with the radial resolution required by the volume integral. This surface integral method has come to be known as the ‘virtual-casing principle’. In this paper, a full derivation of this method is presented along with a discussion regarding its optimal application.

  15. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  16. High refractive index immersion liquid for superresolution 3D imaging using sapphire-based aplanatic numerical aperture increasing lens optics.

    PubMed

    Laskar, Junaid M; Shravan Kumar, P; Herminghaus, Stephan; Daniels, Karen E; Schröter, Matthias

    2016-04-20

    Optically transparent immersion liquids with refractive index (n∼1.77) to match the sapphire-based aplanatic numerical aperture increasing lens (aNAIL) are necessary for achieving deep 3D imaging with high spatial resolution. We report that antimony tribromide (SbBr3) salt dissolved in liquid diiodomethane (CH2I2) provides a new high refractive index immersion liquid for optics applications. The refractive index is tunable from n=1.74 (pure) to n=1.873 (saturated), by adjusting either salt concentration or temperature; this allows it to match (or even exceed) the refractive index of sapphire. Importantly, the solution gives excellent light transmittance in the ultraviolet to near-infrared range, an improvement over commercially available immersion liquids. This refractive-index-matched immersion liquid formulation has enabled us to develop a sapphire-based aNAIL objective that has both high numerical aperture (NA=1.17) and long working distance (WD=12  mm). This opens up new possibilities for deep 3D imaging with high spatial resolution. PMID:27140083

  17. High refractive index immersion liquid for superresolution 3D imaging using sapphire-based aplanatic numerical aperture increasing lens optics.

    PubMed

    Laskar, Junaid M; Shravan Kumar, P; Herminghaus, Stephan; Daniels, Karen E; Schröter, Matthias

    2016-04-20

    Optically transparent immersion liquids with refractive index (n∼1.77) to match the sapphire-based aplanatic numerical aperture increasing lens (aNAIL) are necessary for achieving deep 3D imaging with high spatial resolution. We report that antimony tribromide (SbBr3) salt dissolved in liquid diiodomethane (CH2I2) provides a new high refractive index immersion liquid for optics applications. The refractive index is tunable from n=1.74 (pure) to n=1.873 (saturated), by adjusting either salt concentration or temperature; this allows it to match (or even exceed) the refractive index of sapphire. Importantly, the solution gives excellent light transmittance in the ultraviolet to near-infrared range, an improvement over commercially available immersion liquids. This refractive-index-matched immersion liquid formulation has enabled us to develop a sapphire-based aNAIL objective that has both high numerical aperture (NA=1.17) and long working distance (WD=12  mm). This opens up new possibilities for deep 3D imaging with high spatial resolution.

  18. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, the lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images that float in the air, and the observers can touch these floating images and interact with them, much as kids play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method, using a single camera rather than a stereo camera, and the results of our viewer system.
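
    A generic single-camera pose estimate from point markers can be sketched with OpenCV's solvePnP; this is only an illustration of the idea (the marker layout, intrinsics and pixel coordinates below are made up), not the geometric analysis derived by the authors:

        # Hedged illustration of single-camera pose estimation from point markers
        # (e.g. infrared LEDs); a generic sketch, not the paper's method.
        import numpy as np
        import cv2

        # Known 3D positions of the LED markers on the viewer, in its own frame (metres).
        object_points = np.array([[0.00, 0.00, 0.0],
                                  [0.10, 0.00, 0.0],
                                  [0.10, 0.06, 0.0],
                                  [0.00, 0.06, 0.0]], dtype=np.float64)

        # Detected 2D centroids of the LED blobs in the camera image (pixels) -- assumed input.
        image_points = np.array([[320.0, 240.0],
                                 [420.0, 238.0],
                                 [422.0, 300.0],
                                 [318.0, 302.0]], dtype=np.float64)

        # Assumed pinhole intrinsics with no lens distortion.
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        dist = np.zeros(5)

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        if ok:
            R, _ = cv2.Rodrigues(rvec)      # rotation matrix of the viewer
            print("viewer position in camera frame (m):", tvec.ravel())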

  19. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required, suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries
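
    The frame-extraction step of the data-acquisition stage might look like the following sketch (file names and the sampling interval are placeholders; the authors' actual frame-selection criteria are not reproduced here):

        # Sketch of the frame-extraction step: sample every Nth frame from a
        # recorded video for later photogrammetric processing.
        import cv2

        def extract_frames(video_path: str, out_prefix: str, every_nth: int = 30) -> int:
            cap = cv2.VideoCapture(video_path)
            saved, index = 0, 0
            while True:
                ok, frame = cap.read()
                if not ok:                      # end of stream
                    break
                if index % every_nth == 0:      # keep one frame per `every_nth`
                    cv2.imwrite(f"{out_prefix}_{saved:04d}.png", frame)
                    saved += 1
                index += 1
            cap.release()
            return saved

        # Hypothetical file names for illustration only.
        n = extract_frames("survey_camera_1.mp4", "frames/cam1", every_nth=30)
        print(f"saved {n} frames for 3D reconstruction")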

  20. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from the disparity, and the multiple-baseline method that is used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
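
    A plain sum-of-squared-differences (SSD) block match between two elemental images, on which such disparity-based depth extraction builds, can be sketched as follows; this is a generic single-baseline example, not the colour-SSD or multiple-baseline formulation of the paper:

        # Generic SSD block matching for one pixel; a simplified sketch only.
        import numpy as np

        def ssd_disparity(left: np.ndarray, right: np.ndarray,
                          row: int, col: int, window: int = 5, max_disp: int = 32) -> int:
            """Return the integer disparity minimising SSD for one pixel (away from borders)."""
            half = window // 2
            patch = left[row - half:row + half + 1, col - half:col + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                c = col - d
                if c - half < 0:                 # candidate window would leave the image
                    break
                cand = right[row - half:row + half + 1, c - half:c + half + 1]
                cost = np.sum((patch.astype(np.float64) - cand.astype(np.float64)) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            return best_d
        # Depth then follows from the usual relation depth ~ focal_length * baseline / disparity.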

  1. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.
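
    One simple heuristic for flagging degraded video frames before tracking is a blur-and-brightness test such as the sketch below; it is only an illustration of the idea, not the corruption test used by the authors:

        # Illustrative heuristic for excluding degraded frames (motion blur or
        # specular washout) prior to tracking; thresholds are made up.
        import cv2
        import numpy as np

        def frame_is_usable(frame_bgr: np.ndarray,
                            blur_threshold: float = 100.0,
                            brightness_range: tuple = (20.0, 235.0)) -> bool:
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance => blurry
            mean_brightness = float(gray.mean())                 # extremes => washed out / dark
            return (sharpness >= blur_threshold
                    and brightness_range[0] <= mean_brightness <= brightness_range[1])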

  2. How incorporation of scents could enhance immersive virtual experiences

    PubMed Central

    Ischer, Matthieu; Baron, Naëm; Mermoud, Christophe; Cayeux, Isabelle; Porcherot, Christelle; Sander, David; Delplanque, Sylvain

    2014-01-01

    Under normal everyday conditions, senses all work together to create experiences that fill a typical person's life. Unfortunately for behavioral and cognitive researchers who investigate such experiences, standard laboratory tests are usually conducted in a nondescript room in front of a computer screen. They are very far from replicating the complexity of real world experiences. Recently, immersive virtual reality (IVR) environments became promising methods to immerse people into an almost real environment that involves more senses. IVR environments provide many similarities to the complexity of the real world and at the same time allow experimenters to constrain experimental parameters to obtain empirical data. This can eventually lead to better treatment options and/or new mechanistic hypotheses. The idea that increasing sensory modalities improve the realism of IVR environments has been empirically supported, but the senses used did not usually include olfaction. In this technology report, we will present an odor delivery system applied to a state-of-the-art IVR technology. The platform provides a three-dimensional, immersive, and fully interactive visualization environment called “Brain and Behavioral Laboratory—Immersive System” (BBL-IS). The solution we propose can reliably deliver various complex scents during different virtual scenarios, at a precise time and space and without contamination of the environment. The main features of this platform are: (i) the limited cross-contamination between odorant streams with a fast odor delivery (< 500 ms), (ii) the ease of use and control, and (iii) the possibility to synchronize the delivery of the odorant with pictures, videos or sounds. How this unique technology could be used to investigate typical research questions in olfaction (e.g., emotional elicitation, memory encoding or attentional capture by scents) will also be addressed. PMID:25101017

  3. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's task that is performed individually; the other is communication with each other. Both are very important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies of both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is that of the avatar's eyes, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors, which are worn by the users, restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in the collaborative situation in virtual space and in real space, together with a questionnaire, was performed. The result demonstrates the effectiveness of InterActor's nodding in the collaborative situation.

  4. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (ie, paper and pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  5. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  6. Simulation and visualization of mechanical systems in immersive virtual environments

    SciTech Connect

    Canfield, T. R.

    1998-04-17

    A prototype for doing real-time simulation of mechanical systems in immersive virtual environments has been developed to run in the CAVE and on the ImmersaDesk at Argonne National Laboratory. This system has three principal software components: a visualization component for rendering the model and providing a user interface, communications software, and mechanics simulation software. The system can display the three-dimensional objects in the CAVE and project various scalar fields onto the exterior surface of the objects during real-time execution.

  7. Assessment of radiation awareness training in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Whisker, Vaughn E., III

    The prospect of new nuclear power plant orders in the near future and the graying of the current workforce create a need to train new personnel faster and better. Immersive virtual reality (VR) may offer a solution to the training challenge. VR technology presented in a CAVE Automatic Virtual Environment (CAVE) provides a high-fidelity, one-to-one scale environment where areas of the power plant can be recreated and virtual radiation environments can be simulated, making it possible to safely expose workers to virtual radiation in the context of the actual work environment. The use of virtual reality for training is supported by many educational theories; constructivism and discovery learning, in particular. Educational theory describes the importance of matching the training to the task. Plant access training and radiation worker training, common forms of training in the nuclear industry, rely on computer-based training methods in most cases, which effectively transfer declarative knowledge, but are poor at transferring skills. If an activity were to be added, the training would provide personnel with the opportunity to develop skills and apply their knowledge so they could be more effective when working in the radiation environment. An experiment was developed to test immersive virtual reality's suitability for training radiation awareness. Using a mixed methodology of quantitative and qualitative measures, the subjects' performances before and after training were assessed. First, subjects completed a pre-test to measure their knowledge prior to completing any training. Next they completed unsupervised computer-based training, which consisted of a PowerPoint presentation and a PDF document. After completing a brief orientation activity in the virtual environment, one group of participants received supplemental radiation awareness training in a simulated radiation environment presented in the CAVE, while a second group, the control group, moved directly to the

  8. Immersive virtual environments for emotional engineering: description and preliminary results.

    PubMed

    Rodríguez, Alejandro; Rey, Beatriz; Alcañiz, Mariano

    2011-01-01

    This work aims to identify the arousal and presence level during an emotional engineering study. During the experimental sessions, a high-immersion Virtual Reality (VR) system, a CAVE-like configuration, will be used. Thirty-six volunteers will navigate through virtual houses that can be customized and that have been designed for emotional induction. Emotional induction will be obtained by stimulating the senses of sight, hearing and smell. For this purpose, the ambient lighting, music and smell will be controlled by the researcher, who will create a comfortable environment for the subject. Several physiological variables - Electrocardiogram (ECG), Respiratory signal and Galvanic Skin Response (GSR) - will be recorded during the sessions. The obtained results will help furniture companies identify the senses that have more influence on emotions and will be the basis for new studies about user needs in the sector of furniture and interior decoration.

  9. Management and services for large-scale virtual 3D urban model data based on network

    NASA Astrophysics Data System (ADS)

    He, Zhengwei; Chen, Jing; Wu, Huayi

    2008-10-01

    The buildings in a modern city are complex and diverse, and their quantity is huge. This presents a very big challenge for constructing 3D GIS under network conditions and eventually realizing the Digital Earth. After analyzing the characteristics of network services for massive 3D urban building model data, this paper focuses on the organization and management of spatial data and the network service strategy, and proposes a progressive network transmission schema based on the spatial resolution and the component elements of 3D building model data. Next, this paper puts forward a multistage-link three-dimensional spatial data organization model and an encoding method for the spatial index based on a full-level quadtree structure. Then, a virtual earth platform, called GeoGlobe, was developed using the above theory. Experimental results show that the above 3D spatial data management model and service theory can effectively provide network services for large-scale 3D urban model data, with good application results and user experience.
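
    For readers unfamiliar with full-level quadtree indexing, a tile address in such a structure is often encoded by interleaving the bits of the tile's column and row into a base-4 string (a "quadkey"); the sketch below is illustrative, and the paper's multistage-link encoding may differ:

        # Illustrative quadkey encoding for a full-level quadtree tile index.
        def quadkey(col: int, row: int, level: int) -> str:
            """Interleave the bits of (col, row) into a base-4 string of length `level`."""
            key = []
            for i in range(level, 0, -1):
                digit = 0
                mask = 1 << (i - 1)
                if col & mask:
                    digit += 1
                if row & mask:
                    digit += 2
                key.append(str(digit))
            return "".join(key)

        # Example: the tile at column 3, row 5 on level 3 of the quadtree.
        print(quadkey(3, 5, 3))   # -> "213"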

  10. Approach to Constructing 3d Virtual Scene of Irrigation Area Using Multi-Source Data

    NASA Astrophysics Data System (ADS)

    Cheng, S.; Dou, M.; Wang, J.; Zhang, S.; Chen, X.

    2015-10-01

    For an irrigation area that is often complicated by various 3D artificial ground features and the natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigation operations vividly and intuitively. Building on previous researchers' achievements, a simple, practical and cost-effective approach was proposed in this study by adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps and so on, a 3D terrain model and ground feature models were created interactively. Both models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, with a good visual effect and primary GIS functions for data query and analysis, was constructed. Yet there is still a long way to go in establishing a true 3D GIS for the irrigation area; the issues of this study are discussed in depth and future research directions are pointed out at the end of the paper.

  11. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow
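
    Schematically, a fractional-step (projection) update advances an intermediate velocity without the pressure, solves a Poisson equation, and then projects onto a divergence-free field; the outline below is generic and omits the paper's curvilinear, staggered-grid details:

        % Generic fractional-step (projection) scheme; schematic only.
        \begin{align*}
        \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
          &= -(\mathbf{u}\cdot\nabla)\mathbf{u} + \nu\nabla^{2}\mathbf{u}
          && \text{(momentum step without pressure)}\\
        \nabla^{2}\phi &= \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t}
          && \text{(pressure-Poisson equation)}\\
        \mathbf{u}^{n+1} &= \mathbf{u}^{*} - \Delta t\,\nabla\phi
          && \text{(projection to a divergence-free field)}
        \end{align*}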

  12. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
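
    The core of a dynamic time-warping comparison between two gesture feature sequences can be sketched as follows; this generic version omits the real-time automatic segmentation and other refinements described in the paper:

        # Generic dynamic time warping (DTW) distance between two gesture feature
        # sequences; illustrative only.
        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            """a, b: arrays of shape (T, D) -- time steps of D-dimensional features."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])          # local distance
                    cost[i, j] = d + min(cost[i - 1, j],              # insertion
                                         cost[i, j - 1],              # deletion
                                         cost[i - 1, j - 1])          # match
            return float(cost[n, m])

        # A gesture is then recognized by picking the template with minimum DTW distance.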

  13. Three Primary School Students' Cognition about 3D Rotation in a Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Yeh, Andy

    2010-01-01

    This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students…
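
    The question posed to the students reflects the fact that 3D rotations do not commute, which can be checked numerically with rotation matrices (the axis and sign conventions below are illustrative):

        # Numerical check: turning right 45 degrees then rolling up 45 degrees does
        # not give the same facing as the reverse order, because 3D rotations do not
        # commute. Conventions: +y is up, +z is forward; signs are illustrative.
        import numpy as np

        def yaw(deg):    # turn right/left about the vertical axis
            t = np.radians(deg)
            return np.array([[np.cos(t), 0, np.sin(t)],
                             [0, 1, 0],
                             [-np.sin(t), 0, np.cos(t)]])

        def pitch(deg):  # roll up/down about the sideways axis
            t = np.radians(deg)
            return np.array([[1, 0, 0],
                             [0, np.cos(t), np.sin(t)],
                             [0, -np.sin(t), np.cos(t)]])

        forward = np.array([0.0, 0.0, 1.0])        # initial facing direction
        a = pitch(45) @ yaw(45) @ forward          # turn right first, then roll up
        b = yaw(45) @ pitch(45) @ forward          # roll up first, then turn right
        print(np.allclose(a, b))                   # False: the two facings differ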

  14. The Cognitive Apprenticeship Theory for the Teaching of Mathematics in an Online 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Paraskeva, Fotini

    2013-01-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective.…

  15. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments" (http://www.le.ac.uk/moose)…

  16. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  17. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  18. Laying the Groundwork for Socialisation and Knowledge Construction within 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Minocha, Shailey; Roberts, Dave

    2008-01-01

    The paper reports the theoretical underpinnings for the pedagogical role and rationale for adopting 3D virtual worlds for socialisation and knowledge creation in distance education. Socialisation or "knowing one another" in remote distributed environments can be achieved through synchronous technologies such as instant messaging, audio and…

  19. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    ERIC Educational Resources Information Center

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  20. Teaching Digital Natives: 3-D Virtual Science Lab in the Middle School Science Classroom

    ERIC Educational Resources Information Center

    Franklin, Teresa J.

    2008-01-01

    This paper presents the development of a 3-D virtual environment in Second Life for the delivery of standards-based science content for middle school students in the rural Appalachian region of Southeast Ohio. A mixed method approach in which quantitative results of improved student learning and qualitative observations of implementation within…

  1. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry

    NASA Astrophysics Data System (ADS)

    Villarrubia, J. S.; Tondare, V. N.; Vladár, A. E.

    2016-03-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples—mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
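
    A rough skin with a prescribed power spectral density can be generated, in sketch form, by assigning random phases in Fourier space and inverse-transforming; the example below is a one-dimensional illustration with made-up PSD parameters and omits normalization details:

        # Sketch: random rough profile whose PSD approximates a prescribed function,
        # built by random-phase inverse FFT (normalization factors omitted).
        import numpy as np

        rng = np.random.default_rng(0)

        def rough_profile(n: int, dx: float, psd) -> np.ndarray:
            """Return a length-n real profile with approximate PSD `psd(spatial_freq)`."""
            freqs = np.fft.rfftfreq(n, d=dx)                 # spatial frequencies
            amplitude = np.sqrt(psd(freqs))                  # amplitude spectrum from PSD
            phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
            spectrum = amplitude * np.exp(1j * phases)
            spectrum[0] = 0.0                                # zero mean
            return np.fft.irfft(spectrum, n=n)

        # Example with a purely illustrative power-law PSD.
        profile = rough_profile(1024, dx=1.0, psd=lambda f: 1.0 / (1.0 + (f / 0.05) ** 2))
        print(profile.std())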

  2. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle Design, through the subsequent Engineering steps, Ergonomic evaluation and perceived quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, iPhone UI look'n'feel and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a 4-sided homemade low-cost virtual reality room, powered by ultra-short-range and standard HD home projectors.

  3. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows.

    PubMed

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-04-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position.

  4. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  5. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  6. Immersion transmission ellipsometry (ITE): a new method for the precise determination of the 3D indicatrix of thin films

    NASA Astrophysics Data System (ADS)

    Jung, C. C.; Stumpe, J.

    2005-02-01

    The new method of immersion transmission ellipsometry (ITE) [1] has been developed. It allows the highly accurate determination of the absolute three-dimensional (3D) refractive indices of anisotropic thin films. The method is combined with conventional ellipsometry in transmission and reflection, and the thickness determination of anisotropic films solely by optical methods also becomes more accurate. The method is applied to the determination of the 3D refractive indices of thin spin-coated films of an azobenzene-containing liquid-crystalline copolymer. The development of the anisotropy in these films by photo-orientation and subsequent annealing is demonstrated. Depending on the annealing temperature, oblate or prolate orders are generated.

  7. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient specific data, and display that data to the end user using consumer level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glass, and the DK2 version Oculus Rift, as well as two different user interaction devices - a space mouse and traditional keyboard controls. PMID:27046584

  8. Algorithm for simulation of craniotomies assisted by peripheral for 3D virtual navigation.

    PubMed

    Duque, Sara I; Ochoa, John F; Botero, Andrés F; Ramirez, Mateo

    2015-01-01

    Neurosurgical procedures require high precision and accurate localization of the structures. For that reason, and due to the advances in 3D visualization, software for planning and training neurosurgeries has become an important tool for neurosurgeons and students, but the manipulation of 3D structures is not always easy for staff who usually work with 2D images. This paper describes a system developed in open source software that allows performing a virtual craniotomy (a common procedure in neurosurgery that enables access to intracranial lesions) in 3D Slicer; the system includes a peripheral input device in order to permit the manipulation of the 3D structures according to camera movements and to guide the movement of the craniotomy tool. PMID:26737914

  9. Assessment of radiation awareness training in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Whisker, Vaughn E., III

    The prospect of new nuclear power plant orders in the near future and the graying of the current workforce create a need to train new personnel faster and better. Immersive virtual reality (VR) may offer a solution to the training challenge. VR technology presented in a CAVE Automatic Virtual Environment (CAVE) provides a high-fidelity, one-to-one scale environment where areas of the power plant can be recreated and virtual radiation environments can be simulated, making it possible to safely expose workers to virtual radiation in the context of the actual work environment. The use of virtual reality for training is supported by many educational theories; constructivism and discovery learning, in particular. Educational theory describes the importance of matching the training to the task. Plant access training and radiation worker training, common forms of training in the nuclear industry, rely on computer-based training methods in most cases, which effectively transfer declarative knowledge, but are poor at transferring skills. If an activity were to be added, the training would provide personnel with the opportunity to develop skills and apply their knowledge so they could be more effective when working in the radiation environment. An experiment was developed to test immersive virtual reality's suitability for training radiation awareness. Using a mixed methodology of quantitative and qualitative measures, the subjects' performances before and after training were assessed. First, subjects completed a pre-test to measure their knowledge prior to completing any training. Next they completed unsupervised computer-based training, which consisted of a PowerPoint presentation and a PDF document. After completing a brief orientation activity in the virtual environment, one group of participants received supplemental radiation awareness training in a simulated radiation environment presented in the CAVE, while a second group, the control group, moved directly to the

  10. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  11. vPresent: A cloud based 3D virtual presentation environment for interactive product customization

    NASA Astrophysics Data System (ADS)

    Nan, Xiaoming; Guo, Fei; He, Yifeng; Guan, Ling

    2013-09-01

    In modern society, many companies offer product customization services to their customers. There are two major issues in providing customized products. First, product manufacturers need to effectively present their products to customers who may be located in any geographical area. Second, customers need to be able to provide feedback on the product in real time. However, traditional presentation approaches cannot effectively convey sufficient information about the product or efficiently adjust the product design according to customers' real-time feedback. In order to address these issues, we propose vPresent, a cloud-based 3D virtual presentation environment, in this paper. In vPresent, the product expert can show the 3D virtual product to remote customers and dynamically customize the product based on customers' feedback, while customers can provide their opinions in real time as they view a vivid 3D visualization of the product. Since the proposed vPresent is a cloud-based system, customers are able to access the customized virtual products from anywhere at any time, via desktop, laptop, or even smart phone. The proposed vPresent is expected to effectively deliver 3D visual information to customers and provide an interactive design platform for the development of customized products.

  12. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain, weather information and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.

  13. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  14. Heart rate variability (HRV) during virtual reality immersion.

    PubMed

    Malińska, Marzena; Zużewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2015-01-01

    The goal of the study was to assess the effects of hour-long training in handling a virtual environment (sVR) and of watching a stereoscopic 3D movie on the mechanisms of autonomic heart rate (HR) regulation in subjects who were not predisposed to motion sickness. In order to exclude predispositions to motion sickness, all the participants (n=19) underwent a Coriolis test. During exposure to 3D and sVR, the ECG signal was continuously recorded using the Holter method. For twelve consecutive 5-min epochs of the ECG signal, heart rate variability (HRV) was analysed in the time and frequency domains. Thirty minutes after the beginning of the training in handling the virtual workstation, a significant increase in LF spectral power was noted. The values of the sympathovagal LF/HF index during sVR indicated a significant increase in sympathetic predominance in four time intervals, namely between the 5th and the 10th minute, between the 15th and the 20th minute, between the 35th and the 40th minute and between the 55th and the 60th minute of exposure.

  15. Heart rate variability (HRV) during virtual reality immersion

    PubMed Central

    Malińska, Marzena; Zużewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2015-01-01

    The goal of the study was to assess the effects of hour-long training in handling a virtual environment (sVR) and of watching a stereoscopic 3D movie on the mechanisms of autonomic heart rate (HR) regulation in subjects who were not predisposed to motion sickness. In order to exclude predispositions to motion sickness, all the participants (n=19) underwent a Coriolis test. During exposure to 3D and sVR, the ECG signal was continuously recorded using the Holter method. For twelve consecutive 5-min epochs of the ECG signal, heart rate variability (HRV) was analysed in the time and frequency domains. Thirty minutes after the beginning of the training in handling the virtual workstation, a significant increase in LF spectral power was noted. The values of the sympathovagal LF/HF index during sVR indicated a significant increase in sympathetic predominance in four time intervals, namely between the 5th and the 10th minute, between the 15th and the 20th minute, between the 35th and the 40th minute and between the 55th and the 60th minute of exposure. PMID:26327262
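
    The sympathovagal LF/HF index reported above is a standard frequency-domain HRV quantity. The sketch below is only an illustration of that computation under common conventions (LF 0.04-0.15 Hz, HF 0.15-0.4 Hz), not the study's analysis code.

```python
# Illustrative LF/HF computation for one RR-interval epoch; band limits follow
# common HRV convention and are not taken from the paper itself.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """rr_ms: successive RR intervals in milliseconds for one epoch."""
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)                       # time of each beat (s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = interp1d(beat_times, rr_s, kind="cubic")(grid)
    rr_even -= rr_even.mean()                          # remove the DC component
    freqs, psd = welch(rr_even, fs=fs, nperseg=min(256, len(rr_even)))
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf, hf, lf / hf

# Example with ~5 minutes of synthetic RR intervals around 800 ms.
rng = np.random.default_rng(0)
print(lf_hf_ratio(800 + 30 * rng.standard_normal(375)))
```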

  17. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    PubMed

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  18. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    PubMed Central

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
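
    The gesture recognition stage above uses dynamic time warping (DTW). As a generic illustration of that algorithm (not the authors' implementation), a minimal DTW distance between two gesture trajectories can be written as follows; a recorded gesture would then be assigned to the stored template with the smallest distance.

```python
# Generic DTW distance between two trajectories, each an (N, D) array of hand
# positions or features per frame; illustrative only.
import numpy as np

def dtw_distance(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

query = np.cumsum(np.random.randn(40, 3), axis=0)      # toy recorded gesture
template = np.cumsum(np.random.randn(55, 3), axis=0)   # toy stored template
print(dtw_distance(query, template))
```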

  19. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of three and five Biederman geons, respectively) actively, passively, or not at all. Both their 3D mental representation of the objects and their visuo-spatial ability (VSA) were assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, whereas people with a medium or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance of taking individual differences into account. PMID:20116394

  1. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena on Mars can be highly dynamic and show daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Possible sources of the wave activity were suggested to be dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single altitude layers have to be analyzed carefully, and relations between different atmospheric quantities and interaction with the surface of Mars have to be considered. The CROSS DRIVE project addresses the presentation of these data with a global view by means of virtual reality techniques. Complex orbiter data from spectrometers and observation data from Earth are combined with global circulation models and high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from these datasets and can change visualization parameters in real time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of Mars' atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces together. This enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users. This allows that everybody is always

  2. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  3. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  4. The potential of 3-D virtual worlds in professional nursing education.

    PubMed

    Hansen, Margaret M; Murray, Peter J; Erdley, W Scott

    2009-01-01

    Three-dimensional (3-D) virtual worlds (VWs), such as Second Life, are actively being explored for their potential use in health care and nursing professional education, and even for practice. The relevance of this e-learning innovation on a large scale for teaching students and professionals is yet to be demonstrated, and the variables influencing its adoption by academics and health care professionals, such as increased knowledge, self-directed learning, and peer collaboration, require empirical research. PMID:19592909

  5. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  6. The Rufous Hummingbird in hovering flight -- full-body 3D immersed boundary simulation

    NASA Astrophysics Data System (ADS)

    Ferreira de Sousa, Paulo; Luo, Haoxiang; Bocanegra Evans, Humberto

    2009-11-01

    Hummingbirds are an interesting case study for the development of micro-air vehicles since they combine the high flight stability of insects with the low metabolic power per unit of body mass of bats during hovering flight. In this study, simulations of a full-body hummingbird in hovering flight were performed at a Reynolds number around 3600. The simulations employ a versatile sharp-interface immersed boundary method recently enhanced at our lab that can treat thin membranes and solid bodies alike. Implemented on a Cartesian mesh, the numerical method allows us to capture the vortex dynamics of the wake accurately and efficiently. The whole-body simulation will allow us to clearly identify the three general patterns of flow velocity around the body of the hummingbird referred to in Altshuler et al. (Exp Fluids 46 (5), 2009). One focus of the current study is to understand the interaction between the wakes of the two wings at the end of the upstroke, and how the tail actively deflects the flow to contribute to pitch stability. Another focus of the study is to identify the pair of unconnected loops underneath each wing.

  7. Chaotic orbits tracked by a 3D asymmetric immersed solid at high Reynolds numbers using a novel Gerris-Immersed Solid (DNS) Solver

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Valluri, Prashant; Govindarajan, Rama

    2014-11-01

    The motion of a neutrally buoyant ellipsoidal solid with an initial momentum was theoretically predicted to be chaotic in inviscid flow by Aref (1993). On the other hand, the particle could stop moving when the damping viscous force is strong enough. This work provides numerical evidence for 3D chaotic motion of a neutrally buoyant general ellipsoidal solid and suggests criteria for triggering this motion. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and density aspect ratios also have some influence on the chaotic behaviour. We have developed a novel variant of the immersed solid solver under the framework of the Gerris flow package of Popinet et al. (2003). Our solid solver, the Gerris Immersed Solid Solver (GISS), is capable of handling six-degree-of-freedom motion of particles of arbitrary geometry and number in three dimensions and can precisely predict the hydrodynamic interactions and their effects on particle trajectories. The reliability and accuracy have been checked against a series of classical test cases covering both translational and rotational motions over a vast range of flow properties.

  8. The cognitive apprenticeship theory for the teaching of mathematics in an online 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Bouta, Hara; Paraskeva, Fotini

    2013-03-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective. To this end, we propose a pedagogical framework based on the cognitive apprenticeship for deriving principles and guidelines to inform the design, development and use of a 3D virtual environment. This study examines how the use of a 3D virtual world facilitates the teaching of mathematics in primary education by combining design principles and guidelines based on the Cognitive Apprenticeship Theory and the teaching methods that this theory introduces. We focus specifically on 5th and 6th grade students' engagement (behavioral, affective and cognitive) while learning fractional concepts over a period of two class sessions. Quantitative and qualitative analyses indicate considerable improvement in the engagement of the students who participated in the experiment. This paper presents the findings regarding students' cognitive engagement in the process of comprehending basic fractional concepts - notoriously hard for students to master. The findings are encouraging and suggestions are made for further research.

  9. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    ERIC Educational Resources Information Center

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most…

  10. Effects of 3D Virtual Reality of Plate Tectonics on Fifth Grade Students' Achievement and Attitude toward Science

    ERIC Educational Resources Information Center

    Kim, Paul

    2006-01-01

    This study examines the effects of a teaching method using 3D virtual reality simulations on achievement and attitude toward science. An experiment was conducted with fifth-grade students (N = 41) to examine the effects of 3D simulations, designed to support inquiry-based science curriculum. An ANOVA analysis revealed that the 3D group scored…

  11. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and to model the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology we completed the Mackay campus environmental education and digital cultural platform. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are modeled at their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they would like to take to view the historical sites on the platform.

  12. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz, followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud, followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.
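
    The pipeline above begins with an automated structure-from-motion step applied to calibrated WLC frames. As a hedged illustration of one building block of such a pipeline (not the authors' software), the sketch below estimates the relative camera pose between two consecutive frames with OpenCV; the file names and the intrinsic matrix K are placeholders.

```python
# Illustrative two-view pose estimation, one building block of a
# structure-from-motion pipeline; file names and K are placeholder values.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],     # assumed intrinsics after calibration
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then relative rotation R and translation t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```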

  13. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients' preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and to a user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show, firstly, that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to the privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  14. Elderly healthcare monitoring using an avatar-based 3D virtual environment.

    PubMed

    Pouke, Matti; Häkkilä, Jonna

    2013-12-17

    Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients' preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and to a user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show, firstly, that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to the privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand.

  15. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, by making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by implementing specific software using Unity. The 3D models were implemented by adding responsive points of interest in relation to important symbols or features of the artefacts. This allows highlighting single parts of an artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  16. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  17. Fast extraction of minimal paths in 3D images and applications to virtual endoscopy.

    PubMed

    Deschamps, T; Cohen, L D

    2001-12-01

    The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in as automatic a way as possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. Path construction in 3D images becomes a very tedious task, and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path tracking method to overcome those drawbacks: we are able to build a path given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] for extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution which extends minimal paths to 3D images and gives new improvements of the approach that are relevant in 2D as well as in 3D for extracting linear structures in images. It includes techniques to make the path extraction scheme faster and easier by reducing the user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimum interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR). PMID:11731307
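
    The minimal-path idea the authors extend to 3D can be illustrated with an independent Fast Marching implementation (scikit-fmm), not their code: propagate an arrival-time front from the start point through a speed map derived from the image, then follow the arrival-time gradient back from the end point. The speed map and seed points below are synthetic placeholders.

```python
# Illustrative minimal-path extraction by Fast Marching; requires scikit-fmm.
import numpy as np
import skfmm

shape = (64, 64, 64)
speed = np.ones(shape)                  # in practice: high inside the tubular
speed[:, 30:34, :] *= 10.0              # structure, low elsewhere

start, end = (5, 32, 5), (60, 32, 60)

phi = np.ones(shape)                    # zero contour surrounds the start voxel
phi[start] = -1.0
arrival = np.asarray(skfmm.travel_time(phi, speed))   # arrival time from start

# Backtrack from the end point by steepest descent on the arrival time.
g0, g1, g2 = np.gradient(arrival)
p = np.array(end, dtype=float)
path = [p.copy()]
for _ in range(5000):
    i, j, k = np.clip(p.astype(int), 0, np.array(shape) - 1)
    step = np.array([g0[i, j, k], g1[i, j, k], g2[i, j, k]])
    if np.linalg.norm(step) < 1e-9:
        break
    p -= 0.5 * step / np.linalg.norm(step)             # move toward the start
    path.append(p.copy())
    if np.linalg.norm(p - np.array(start)) < 1.0:
        break
print("path samples:", len(path))
```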

  18. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized in order to allow high-quality rendering.

  19. CT virtual endoscopy and 3D stereoscopic visualisation in the evaluation of coronary stenting.

    PubMed

    Sun, Z; Lawrence-Brown

    2009-10-01

    The aim of this case report is to present the additional value provided by CT virtual endoscopy and 3D stereoscopic visualisation when compared with 2D visualisations in the assessment of coronary stenting. A 64-year-old patient was treated with left coronary stenting 8 years ago and recently followed up with multidetector row CT angiography. An in-stent restenosis of the left coronary artery was suspected based on 2D axial and multiplanar reformatted images. 3D virtual endoscopy was generated to demonstrate the smooth intraluminal surface of the coronary artery wall, and there was no evidence of restenosis or intraluminal irregularity. A virtual fly-through of the coronary artery was produced to examine the entire length of the coronary artery with the aim of demonstrating the intraluminal changes following placement of the coronary stent. In addition, stereoscopic views were generated to show the relationship between coronary artery branches and the coronary stent. In comparison with traditional 2D visualisations, virtual endoscopy was useful for assessment of the intraluminal appearance of the coronary artery wall following coronary stent implantation, while stereoscopic visualisation improved observers' understanding of the complex cardiac structures. Thus, both methods could be used as complementary tools in cardiac imaging.

  20. Investigating the interaction between positions and signals of height-channel loudspeakers in reproducing immersive 3d sound

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Antonios

    Since transmission capacities have significantly increased over the past few years, researchers are now able to transmit a larger amount of data, namely multichannel audio content, in consumer applications. What has not yet been investigated in a systematic way is how to deliver the multichannel content. Specifically, researchers' attention is focused on the quest for a standardized immersive reproduction format that incorporates height loudspeakers coupled with the new high-resolution and three-dimensional (3D) media content for a comprehensive 3D experience. To better understand and utilize immersive audio reproduction, this research focused on (1) the interaction between the positioning of height loudspeakers and the signals fed to the loudspeakers, (2) the investigation of the perceptual characteristics associated with the height ambiences, and (3) the influence of inverse filtering on perceived sound quality for realistic 3D sound reproduction. The experiment used two layers of loudspeakers: a horizontal layer following the ITU-R BS.775 five-channel loudspeaker configuration, and a height layer of twelve loudspeakers located at azimuths of +/-30°, +/-50°, +/-70°, +/-90°, +/-110° and +/-130° and an elevation of 30°. Eight configurations were formed, each of which selected four height loudspeakers from the twelve. In the subjective evaluation, listeners compared, ranked and described the eight randomly presented configurations of 4-channel height ambiences. The stimuli for the experiment were four nine-channel (5 channels for the horizontal and 4 for the height loudspeakers) multichannel music excerpts. Moreover, an approach based on Finite Impulse Response (FIR) inverse filtering was attempted in order to remove the particular room's acoustic influence. Another set of trained professionals was informally asked to use descriptors to characterize the newly presented multichannel music with height ambiences rendered with inverse filtering. The
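
    The FIR inverse filtering mentioned above is commonly built by regularized frequency-domain inversion of a measured loudspeaker/room response. The sketch below is a generic illustration of that idea, not the processing chain used in this work; the impulse response, filter length and regularization constant are assumptions.

```python
# Generic regularized FIR inverse filter for a measured impulse response h;
# the response, filter length and beta are placeholder values.
import numpy as np

def inverse_fir(h, n_taps=2048, beta=1e-3):
    """Return an FIR filter g such that h * g approximates a delayed impulse."""
    H = np.fft.rfft(h, n_taps)
    # Regularized inversion: conj(H) / (|H|^2 + beta) avoids amplifying
    # frequencies where the measured response has little energy.
    G = np.conj(H) / (np.abs(H) ** 2 + beta)
    g = np.fft.irfft(G, n_taps)
    return np.roll(g, n_taps // 2)      # shift to make the filter roughly causal

# Toy example: a direct sound plus two reflections.
h = np.zeros(512)
h[0], h[60], h[150] = 1.0, 0.5, 0.25
equalized = np.convolve(h, inverse_fir(h))
print("peak of equalized response at sample", int(np.argmax(np.abs(equalized))))
```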

  1. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837

  2. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
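
    The scalability analysis mentioned above rests on Amdahl's law. For reference, the basic textbook form of the predicted speedup for a parallelizable fraction p on n cores (not the authors' extended model for multicore chips) is shown below.

```python
# Textbook Amdahl's-law speedup; p is the parallelizable fraction of the work.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

# A ~12-fold speedup on 12 cores implies the workload is almost entirely parallel.
for p in (0.90, 0.99, 0.999):
    print(p, round(amdahl_speedup(p, 12), 2))
```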

  3. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  4. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  6. Dynamic WIFI-Based Indoor Positioning in 3D Virtual World

    NASA Astrophysics Data System (ADS)

    Chan, S.; Sohn, G.; Wang, L.; Lee, W.

    2013-11-01

    A web-based system based on the 3DTown project was proposed using the Google Earth plug-in that brings information from indoor positioning devices and real-time sensors into an integrated 3D indoor and outdoor virtual world to visualize the dynamics of urban life within the 3D context of a city. We addressed limitations of the 3DTown project, with particular emphasis on the video surveillance camera used for indoor tracking purposes. The proposed solution was to utilize wireless local area network (WLAN) WiFi as a replacement technology for localizing objects of interest, due to the widespread availability and large coverage area of WiFi in indoor building spaces. Indoor positioning was performed using WiFi without modifying existing building infrastructure or introducing additional access points (APs). A hybrid probabilistic approach was used for indoor positioning based on a previously recorded WiFi fingerprint database in the Petrie Science and Engineering building at York University. In addition, we have developed a 3D building modeling module that allows for efficient reconstruction of outdoor building models to be integrated with indoor building models; a sensor module for receiving, distributing, and visualizing real-time sensor data; and a web-based visualization module for users to explore the dynamic urban life in a virtual world. In order to solve the problems in the implementation of the proposed system, we introduce approaches for the integration of indoor building models with indoor positioning data, as well as real-time sensor information and visualization on the web-based system. In this paper we report the preliminary results of our prototype system, demonstrating the system's capability for implementing a dynamic 3D indoor and outdoor virtual world that is composed of discrete modules connected through pre-determined communication protocols.
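
    The indoor-positioning step above matches live WiFi RSSI readings against a previously recorded fingerprint database using a hybrid probabilistic approach. As a simpler stand-in that illustrates the fingerprinting idea (not the authors' method), the sketch below estimates a position by weighted k-nearest-neighbour matching in signal space; all values are invented.

```python
# Toy WiFi fingerprint positioning by weighted k-NN in RSSI space; the
# database, AP ordering and readings are invented for illustration.
import numpy as np

# Each fingerprint: an (x, y) position and RSSI (dBm) from three known APs.
positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
rssi_db = np.array([[-40, -70, -65],
                    [-70, -42, -68],
                    [-66, -69, -41],
                    [-72, -60, -50]], dtype=float)

def locate(reading, k=3):
    reading = np.asarray(reading, dtype=float)
    dist = np.linalg.norm(rssi_db - reading, axis=1)    # signal-space distance
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-6)              # closer = heavier weight
    return (positions[nearest] * weights[:, None]).sum(axis=0) / weights.sum()

print(locate([-55, -60, -58]))   # estimated (x, y) in metres
```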

  7. Human fear conditioning conducted in full immersion 3-dimensional virtual reality.

    PubMed

    Huff, Nicole C; Zeilinski, David J; Fecteau, Matthew E; Brady, Rachael; LaBar, Kevin S

    2010-01-01

    conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction. PMID:20736913

  8. Avalanche for shape and feature-based virtual screening with 3D alignment.

    PubMed

    Diller, David J; Connell, Nancy D; Welsh, William J

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns. PMID:26458937

  9. Enabling Field Experiences in Introductory Geoscience Classes through the Use of Immersive Virtual Reality

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.; Smith, E.; Sellers, V.; Wyant, P.; Boyer, D. M.; Mobley, C.; Brame, S.

    2015-12-01

    Although field experiences are an important aspect of geoscience education, the opportunity to provide physical world experiences to large groups of introductory students is often limited by access, logistical, and financial constraints. Our project (NSF IUSE 1504619) is investigating the use of immersive virtual reality (VR) technologies as a surrogate for real field experiences in introductory geosciences classes. We are developing a toolbox that leverages innovations in the field of VR, including the Oculus Rift and Google Cardboard, to enable every student in an introductory geology classroom the opportunity to have a first-person virtual field experience in the Grand Canyon. We have opted to structure our VR experience as an interactive game where students must explore the Canyon to accomplish a series of tasks designed to emphasize key aspects of geoscience learning. So far we have produced two demo products for the virtual field trip. The first is a standalone "Rock Box" app developed for the iPhone, which allows students to select different rock samples, examine them in 3D, and obtain basic information about the properties of each sample. The app can act as a supplement to the traditional rock box used in physical geology labs. The second product is a fully functioning VR environment for the Grand Canyon developed using satellite-based topographic and imagery data to retain real geologic features within the experience. Players can freely navigate to explore anywhere they desire within the Canyon, but are guided to points of interest where they are able to complete exercises that will be aligned with specific learning goals. To this point we have integrated elements of the "Rock Box" app within the VR environment, allowing players to examine 3D details of rock samples they encounter within the Grand Canyon. We plan to provide demos of both products and obtain user feedback during our presentation.

  10. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only represent real-world objects in a natural, realistic and vivid way, but can also extend the campus across its real dimensions of time and space, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels and other features. Dynamic interactive functions are then realized by programming the object models exported from 3ds Max with VRML. The research focuses on virtual campus scene modeling and VRML scene design, and on a range of real-time optimization strategies applied during the scene design process, preserving texture map image quality while improving the running speed of texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  11. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  12. Visualization of CFD Results in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Wasfy, Tamer M.; Noor, Ahmed K.

    2001-01-01

    An object-oriented event-driven immersive virtual environment (VE) is described for the visualization of computational fluid dynamics (CFD) results. The VE incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. The fluid domain is discretized using either a multi-block structured grid or an unstructured finite element mesh. The VE allows natural 'fly-through' visualization of the model, the CFD grid, and the model's surroundings. In order to help visualize the flow and its effects on the model, the VE incorporates the following objects: stream objects (lines, surface-restricted lines, ribbons, and volumes); colored surfaces; elevation surfaces; surface arrows; global and local iso-surfaces; vortex cores; and separation/attachment surfaces and lines. Most of these objects can be used for dynamically probing the flow. Particles and arrow animations can be displayed on top of stream objects. Primitive response quantities as well as derived quantities can be used. A recursive tree search algorithm is used for real-time point and value search in the CFD grid.
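
    The closing sentence refers to a recursive tree search for locating the grid cell that contains a probe point. A minimal sketch of that idea, assuming each cell is stored with an axis-aligned bounding box; the node-splitting rule, leaf size and the need for a final in-cell test on curved elements are placeholder choices rather than details from the paper.

      import numpy as np

      class BBoxNode:
          """Node of a simple bounding-volume tree built over CFD grid cells."""
          def __init__(self, cells, depth=0, leaf_size=8):
              self.bbox_min = np.min([c["min"] for c in cells], axis=0)
              self.bbox_max = np.max([c["max"] for c in cells], axis=0)
              if len(cells) <= leaf_size:
                  self.cells, self.children = cells, None
              else:
                  axis = depth % 3                      # cycle through x, y, z
                  cells = sorted(cells, key=lambda c: c["min"][axis])
                  mid = len(cells) // 2
                  self.cells = None
                  self.children = [BBoxNode(cells[:mid], depth + 1, leaf_size),
                                   BBoxNode(cells[mid:], depth + 1, leaf_size)]

          def contains(self, p):
              return bool(np.all(p >= self.bbox_min) and np.all(p <= self.bbox_max))

          def locate(self, p):
              """Recursively find a cell whose bounding box contains point p."""
              if not self.contains(p):
                  return None
              if self.children is None:
                  for cell in self.cells:
                      if np.all(p >= cell["min"]) and np.all(p <= cell["max"]):
                          return cell    # curved elements would need an in-cell test here
                  return None
              for child in self.children:
                  hit = child.locate(p)
                  if hit is not None:
                      return hit
              return None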

  13. An efficient 3D R-tree spatial index method for virtual geographic environments

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Gong, Jun; Zhang, Yeting

    A three-dimensional (3D) spatial index is required for real-time applications that integrate the organization and management of above-ground, underground, indoor and outdoor objects in virtual geographic environments. As one of the most promising methods, the R-tree spatial index has received increasing attention in 3D geospatial database management. Because existing R-tree methods usually suffer from low efficiency, caused by the critical overlap of sibling nodes and the uneven size of nodes, this paper introduces the k-means clustering method and employs the 3D overlap volume, 3D coverage volume and minimum bounding box shape value of nodes as integrative grouping criteria. A new spatial cluster grouping algorithm and a new R-tree insertion algorithm are then proposed. Experimental analysis of the comparative performance of spatial indexing shows that with the new method the overlap of R-tree sibling nodes is reduced drastically and a balance in the volumes of the nodes is maintained.
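
    A minimal sketch of the grouping criteria named above, assuming axis-aligned minimum bounding boxes stored as (min, max) corner pairs. The shape measure and the weighted combination are illustrative guesses, since the paper's exact formulas are not reproduced in the abstract.

      import numpy as np

      def overlap_volume(a_min, a_max, b_min, b_max):
          # Volume of intersection of two axis-aligned 3D boxes (0 if disjoint).
          d = np.minimum(a_max, b_max) - np.maximum(a_min, b_min)
          return float(np.prod(np.clip(d, 0.0, None)))

      def coverage_volume(boxes):
          # Volume (and corners) of the minimum box enclosing a group of boxes.
          lo = np.min([b[0] for b in boxes], axis=0)
          hi = np.max([b[1] for b in boxes], axis=0)
          return float(np.prod(hi - lo)), lo, hi

      def shape_value(lo, hi):
          # One plausible "shape" measure: 1.0 for a perfect cube, smaller for
          # elongated boxes (the paper's definition may differ).
          e = hi - lo
          return float(np.min(e) / np.max(e)) if np.max(e) > 0 else 1.0

      def grouping_cost(groups, w_overlap=1.0, w_cover=1.0, w_shape=1.0):
          # Integrative criterion over candidate sibling groups: penalise mutual
          # overlap and total coverage, reward cube-like nodes.
          covers = [coverage_volume(g) for g in groups]
          overlap = sum(overlap_volume(covers[i][1], covers[i][2],
                                       covers[j][1], covers[j][2])
                        for i in range(len(groups)) for j in range(i + 1, len(groups)))
          cover = sum(c[0] for c in covers)
          shape = sum(1.0 - shape_value(c[1], c[2]) for c in covers)
          return w_overlap * overlap + w_cover * cover + w_shape * shape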

  14. 3D QSAR Studies, Pharmacophore Modeling and Virtual Screening on a Series of Steroidal Aromatase Inhibitors

    PubMed Central

    Xie, Huiding; Qiu, Kaixiong; Xie, Xiaoguang

    2014-01-01

    Aromatase inhibitors are the most important targets in treatment of estrogen-dependent cancers. In order to search for potent steroidal aromatase inhibitors (SAIs) with lower side effects and overcome cellular resistance, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of SAIs to build 3D QSAR models. Reliable and predictive CoMFA and CoMSIA models were obtained with good statistics (CoMFA: q2 = 0.636, r2ncv = 0.988, r2pred = 0.658; CoMSIA: q2 = 0.843, r2ncv = 0.989, r2pred = 0.601). This 3D QSAR approach provides significant insights that can be used to develop novel and potent SAIs. In addition, Genetic algorithm with linear assignment of hypermolecular alignment of database (GALAHAD) was used to derive 3D pharmacophore models. The selected pharmacophore model contains two acceptor atoms and four hydrophobic centers, and was used as a 3D query for virtual screening against the NCI2000 database. Six hit compounds were obtained and their biological activities were further predicted by the CoMFA and CoMSIA models; these hits are expected to aid the design of potent and novel SAIs. PMID:25405729
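
    CoMFA and CoMSIA models are partial least squares (PLS) regressions on 3D field descriptors, and the q2 values quoted above are leave-one-out cross-validated statistics. The sketch below reproduces that statistic with scikit-learn, using random placeholder descriptors in place of real steric/electrostatic fields, which require specialised molecular-modeling software to compute; the component count and data sizes are arbitrary.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut

      def loo_q2(X, y, n_components=3):
          """Leave-one-out cross-validated q^2 = 1 - PRESS / SS, the statistic
          commonly used to judge CoMFA/CoMSIA models (y: measured pIC50)."""
          y = np.asarray(y, dtype=float)
          press = 0.0
          for train, test in LeaveOneOut().split(X):
              pls = PLSRegression(n_components=n_components)
              pls.fit(X[train], y[train])
              press += float((pls.predict(X[test]).ravel()[0] - y[test][0]) ** 2)
          ss = float(((y - y.mean()) ** 2).sum())
          return 1.0 - press / ss

      # Toy usage with random placeholder descriptors.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 200))
      y = rng.normal(size=30)
      print(round(loo_q2(X, y), 3))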

  15. Virtual Sculpting and 3D Printing for Young People with Disabilities.

    PubMed

    Mcloughlin, Leigh; Fryazinov, Oleg; Moseley, Mark; Sanchez, Mathieu; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander

    2016-01-01

    The SHIVA project was designed to provide virtual sculpting tools for young people with complex disabilities, allowing them to engage with artistic and creative activities that they might otherwise never be able to access. Their creations are then physically built using 3D printing. To achieve this, the authors built a generic, accessible GUI and a suitable geometric modeling system and used these to produce two prototype modeling exercises. These tools were deployed in a school for students with complex disabilities and are now being used for a variety of educational and developmental purposes. This article presents the project's motivations, approach, and implementation details together with initial results, including 3D printed objects designed by young people with disabilities. PMID:26780761

  16. Virtual Sculpting and 3D Printing for Young People with Disabilities.

    PubMed

    Mcloughlin, Leigh; Fryazinov, Oleg; Moseley, Mark; Sanchez, Mathieu; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander

    2016-01-01

    The SHIVA project was designed to provide virtual sculpting tools for young people with complex disabilities, allowing them to engage with artistic and creative activities that they might otherwise never be able to access. Their creations are then physically built using 3D printing. To achieve this, the authors built a generic, accessible GUI and a suitable geometric modeling system and used these to produce two prototype modeling exercises. These tools were deployed in a school for students with complex disabilities and are now being used for a variety of educational and developmental purposes. This article presents the project's motivations, approach, and implementation details together with initial results, including 3D printed objects designed by young people with disabilities.

  17. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is now in use in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  18. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real- time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  19. The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds.

    ERIC Educational Resources Information Center

    Dede, Chris

    1995-01-01

    Discusses the evolution of constructivist learning environments and examines the collaboration of simulated software models, virtual environments, and evolving mental models via immersion in artificial realities. A sidebar gives a realistic example of a student navigating through cyberspace. (JMV)

  20. The Road Less Travelled: The Journey of Immersion into the Virtual Field

    ERIC Educational Resources Information Center

    Fitzsimons, Sabrina

    2013-01-01

    This article provides an account of my experience of immersion as a third-level teacher into the three-dimensional multi-user virtual world Second Life for research purposes. An ethnographic methodology was employed. Three stages in this journey are identified: separation, transition and transformation. In presenting this journey of immersion, it…

  1. Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach

    SciTech Connect

    Ruotolo, Francesco; Maffei, Luigi; Di Gabriele, Maria; Iachini, Tina; Masullo, Massimiliano; Ruggiero, Gennaro; Senese, Vincenzo Paolo

    2013-07-15

    Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach for environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project and that in turn can help designers to make educated decisions. In the present study, the audio–visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study

  2. Revealing context-specific conditioned fear memories with full immersion virtual reality.

    PubMed

    Huff, Nicole C; Hernandez, Jose Alba; Fecteau, Matthew E; Zielinski, David J; Brady, Rachael; Labar, Kevin S

    2011-01-01

    The extinction of conditioned fear is known to be context-specific and is often considered more contextually bound than the fear memory itself (Bouton, 2004). Yet, recent findings in rodents have challenged the notion that contextual fear retention is initially generalized. The context-specificity of a cued fear memory to the learning context has not been addressed in the human literature largely due to limitations in methodology. Here we adapt a novel technology to test the context-specificity of cued fear conditioning using full immersion 3-D virtual reality (VR). During acquisition training, healthy participants navigated through virtual environments containing dynamic snake and spider conditioned stimuli (CSs), one of which was paired with electrical wrist stimulation. During a 24-h delayed retention test, one group returned to the same context as acquisition training whereas another group experienced the CSs in a novel context. Unconditioned stimulus expectancy ratings were assayed on-line during fear acquisition as an index of contingency awareness. Skin conductance responses time-locked to CS onset were the dependent measure of cued fear, and skin conductance levels during the interstimulus interval were an index of context fear. Findings indicate that early in acquisition training, participants express contingency awareness as well as differential contextual fear, whereas differential cued fear emerged later in acquisition. During the retention test, differential cued fear retention was enhanced in the group who returned to the same context as acquisition training relative to the context shift group. The results extend recent rodent work to illustrate differences in cued and context fear acquisition and the contextual specificity of recent fear memories. Findings support the use of full immersion VR as a novel tool in cognitive neuroscience to bridge rodent models of contextual phenomena underlying human clinical disorders.

  3. Revealing Context-Specific Conditioned Fear Memories with Full Immersion Virtual Reality

    PubMed Central

    Huff, Nicole C.; Hernandez, Jose Alba; Fecteau, Matthew E.; Zielinski, David J.; Brady, Rachael; LaBar, Kevin S.

    2011-01-01

    The extinction of conditioned fear is known to be context-specific and is often considered more contextually bound than the fear memory itself (Bouton, 2004). Yet, recent findings in rodents have challenged the notion that contextual fear retention is initially generalized. The context-specificity of a cued fear memory to the learning context has not been addressed in the human literature largely due to limitations in methodology. Here we adapt a novel technology to test the context-specificity of cued fear conditioning using full immersion 3-D virtual reality (VR). During acquisition training, healthy participants navigated through virtual environments containing dynamic snake and spider conditioned stimuli (CSs), one of which was paired with electrical wrist stimulation. During a 24-h delayed retention test, one group returned to the same context as acquisition training whereas another group experienced the CSs in a novel context. Unconditioned stimulus expectancy ratings were assayed on-line during fear acquisition as an index of contingency awareness. Skin conductance responses time-locked to CS onset were the dependent measure of cued fear, and skin conductance levels during the interstimulus interval were an index of context fear. Findings indicate that early in acquisition training, participants express contingency awareness as well as differential contextual fear, whereas differential cued fear emerged later in acquisition. During the retention test, differential cued fear retention was enhanced in the group who returned to the same context as acquisition training relative to the context shift group. The results extend recent rodent work to illustrate differences in cued and context fear acquisition and the contextual specificity of recent fear memories. Findings support the use of full immersion VR as a novel tool in cognitive neuroscience to bridge rodent models of contextual phenomena underlying human clinical disorders. PMID:22069384

  4. Male sexual dysfunctions: immersive virtual reality and multimedia therapy.

    PubMed

    Optale, Gabriele; Pastore, Massimiliano; Marin, Silvia; Bordin, Diego; Nasta, Alberto; Pianon, Carlo

    2004-01-01

    The study describes a therapeutic approach using psycho-dynamic psychotherapy integrated with a virtual environment (VE) for resolving impotence, or more precisely erectile dysfunction (ED) of presumably psychological or mixed origin, and premature ejaculation (PE). The plan for therapy consists of 12 sessions (15 if a sexual partner was involved) over a 25-week period, structured around the ontogenetic development of male sexual identity. The methods involved the use of a laptop PC, a joystick, a virtual reality (VR) helmet with a miniature television screen showing new, specially designed CD-ROM programs built with Virtools under Windows 2000, and an audio CD. The study comprised 30 patients: 15 (10 suffering from ED and 5 from PE) plus 15 control patients (10 ED and 5 PE) who underwent the same therapeutic protocol but used an older VR helmet to interact with the older VE on a Pentium 133 PC with 16 MB of RAM. We also compared this study with another study we carried out on 160 men affected by sexual disorders who underwent the same therapeutic protocol but were treated using a VE created in Superscape VRT 5.6, again under Windows 2000 with portable tools. Comparing the groups of patients affected by ED and PE, significantly positive results emerged, without any important differences among the different VEs used. However, there was a percentage increase in undesirable physical reactions during the more realistic 15-minute VR experience built with the Virtools development kit. Psychotherapy alone normally requires long periods of treatment in order to resolve sexual dysfunctions. Considering the particular way in which full-immersion VR involves the subject who experiences it (he is totally unobserved and in complete privacy), we hypothesise that this methodological approach might speed up the therapeutic psycho-dynamic process, which eludes cognitive defences and directly stimulates the subconscious, and that better results could be obtained in the treatment of these sexual disorders. This method can be used by any

  5. Dynamic 3-D virtual fixtures for minimally invasive beating heart procedures.

    PubMed

    Ren, Jing; Patel, Rajni V; McIsaac, Kenneth A; Guiraudon, Gerard; Peters, Terry M

    2008-08-01

    Two-dimensional or 3-D visual guidance is often used for minimally invasive cardiac surgery and diagnosis. This visual guidance suffers from several drawbacks such as limited field of view, loss of signal from time to time, and in some cases, difficulty of interpretation. These limitations become more evident in beating-heart procedures when the surgeon has to perform a surgical procedure in the presence of heart motion. In this paper, we propose dynamic 3-D virtual fixtures (DVFs) to augment the visual guidance system with haptic feedback, to provide the surgeon with more helpful guidance by constraining the surgeon's hand motions, thereby protecting sensitive structures. DVFs can be generated from preoperative dynamic magnetic resonance (MR) or computed tomography (CT) images and then mapped to the patient during surgery. We have validated the feasibility of the proposed method on several simulated surgical tasks using a volunteer's cardiac image dataset. Validation results show that the integration of visual and haptic guidance can permit a user to perform surgical tasks more easily and with a reduced error rate. We believe this is the first work presented in the field of virtual fixtures that explicitly considers heart motion.
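
    The paper's dynamic virtual fixtures are derived from preoperative 4D images; the underlying haptic idea can be sketched as a forbidden-region fixture that pushes the instrument tip away from the closest surface point once it enters a safety margin. The force law, stiffness k, margin d_safe and the static surface sample below are illustrative assumptions; in a dynamic fixture the surface points would be refreshed every haptic cycle from the beating-heart model.

      import numpy as np

      def virtual_fixture_force(tip, surface_points, d_safe=5.0, k=0.4):
          """Forbidden-region virtual fixture (illustrative): push the tool tip
          away from the closest surface point once it comes within d_safe mm,
          with stiffness k (N/mm)."""
          diffs = tip - surface_points            # vectors from surface to tip
          dists = np.linalg.norm(diffs, axis=1)
          i = int(np.argmin(dists))
          d = dists[i]
          if d >= d_safe or d == 0.0:
              return np.zeros(3)
          direction = diffs[i] / d                # unit vector away from the surface
          return k * (d_safe - d) * direction     # force ramps up as the gap closes

      # Example: surface sampled at one cardiac phase (placeholder coordinates).
      surface = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
      print(virtual_fixture_force(np.array([0.5, 0.3, 2.0]), surface))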

  6. Utilising a Collaborative Macro-Script to Enhance Student Engagement: A Mixed Method Study in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Retalis, Symeon; Paraskeva, Fotini

    2012-01-01

    This study examines the effect of using an online 3D virtual environment in teaching Mathematics in Primary Education. In particular, it explores the extent to which student engagement--behavioral, affective and cognitive--is fostered by such tools in order to enhance collaborative learning. For the study we used a purpose-created 3D virtual…

  7. Virtual bronchoscopic approach for combining 3D CT and endoscopic video

    NASA Astrophysics Data System (ADS)

    Sherbondy, Anthony J.; Kiraly, Atilla P.; Austin, Allen L.; Helferty, James P.; Wan, Shu-Yen; Turlington, Janice Z.; Yang, Tao; Zhang, Chao; Hoffman, Eric A.; McLennan, Geoffrey; Higgins, William E.

    2000-04-01

    To improve the care of lung-cancer patients, we are devising a diagnostic paradigm that ties together three-dimensional (3D) high-resolution computed-tomographic (CT) imaging and bronchoscopy. The system expands upon the new concept of virtual endoscopy that has seen recent application to the chest, colon, and other anatomical regions. Our approach applies computer-graphics and image-processing tools to the analysis of 3D CT chest images and complementary bronchoscopic video. It assumes a two-stage assessment of a lung-cancer patient. During Stage 1 (CT assessment), the physician interacts with a number of visual and quantitative tools to evaluate the patient's 'virtual anatomy' (3D CT scan). Automatic analysis gives navigation paths through major airways and to pre-selected suspect sites. These paths provide useful guidance during Stage-1 CT assessment. While interacting with these paths and other software tools, the user builds a multimedia Case Study, capturing telling snapshot views, movies, and quantitative data. The Case Study contains a report on the CT scan and also provides planning information for subsequent bronchoscopic evaluation. During Stage 2 (bronchoscopy), the physician uses (1) the original CT data, (2) software graphical tools, (3) the Case Study, and (4) a standard bronchoscopy suite to have an augmented vision for bronchoscopic assessment and treatment. To use the two data sources (CT and bronchoscopic video) simultaneously, they must be registered. We perform this registration using both manual interaction and an automated matching approach based on mutual information. We demonstrate our overall progress to date using human CT cases and CT-video from a bronchoscopy-training device.
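
    The automated CT-to-video registration step maximises mutual information between a rendered virtual endoluminal view and the bronchoscopic video frame. Below is a minimal NumPy sketch of that similarity metric; the histogram bin count, image preprocessing and choice of optimiser are assumptions not taken from the paper.

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          """Mutual information between two equally sized grayscale images,
          computed from their joint intensity histogram."""
          a = np.asarray(img_a, dtype=float).ravel()
          b = np.asarray(img_b, dtype=float).ravel()
          joint, _, _ = np.histogram2d(a, b, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      # In a CT/video registration loop, one would render a virtual view for a
      # candidate camera pose, evaluate mutual_information(virtual, video_frame),
      # and let an optimiser adjust the pose to maximise the score.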

  8. Instructors' Perceptions of Three-Dimensional (3D) Virtual Worlds: Instructional Use, Implementation and Benefits for Adult Learners

    ERIC Educational Resources Information Center

    Stone, Sophia Jeffries

    2009-01-01

    The purpose of this dissertation research study was to explore instructors' perceptions of the educational application of three-dimensional (3D) virtual worlds in a variety of academic discipline areas and to assess the strengths and limitations this virtual environment presents for teaching adult learners. The guiding research question for this…

  9. Using a Quest in a 3D Virtual Environment for Student Interaction and Vocabulary Acquisition in Foreign Language Learning

    ERIC Educational Resources Information Center

    Kastoudi, Denise

    2011-01-01

    The gaming and interactional nature of the virtual environment of Second Life offers opportunities for language learning beyond the traditional pedagogy. This study case examined the potential of 3D virtual quest games to enhance vocabulary acquisition through interaction, negotiation of meaning and noticing. Four adult students of English at…

  10. An Examination of the Effects of Collaborative Scientific Visualization via Model-Based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning within an Immersive 3D World

    ERIC Educational Resources Information Center

    Soleimani, Ali

    2013-01-01

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits…

  11. Building a 3D Virtual Liver: Methods for Simulating Blood Flow and Hepatic Clearance on 3D Structures.

    PubMed

    White, Diana; Coombe, Dennis; Rezania, Vahid; Tuszynski, Jack

    2016-01-01

    In this paper, we develop a spatio-temporal modeling approach to describe blood and drug flow, as well as drug uptake and elimination, on an approximation of the liver. Extending on previously developed computational approaches, we generate an approximation of a liver, which consists of a portal and hepatic vein vasculature structure, embedded in the surrounding liver tissue. The vasculature is generated via constrained constructive optimization, and then converted to a spatial grid of a selected grid size. Estimates for surrounding upscaled lobule tissue properties are then presented appropriate to the same grid size. Simulation of fluid flow and drug metabolism (hepatic clearance) are completed using discretized forms of the relevant convective-diffusive-reactive partial differential equations for these processes. This results in a single stage, uniformly consistent method to simulate equations for blood and drug flow, as well as drug metabolism, on a 3D structure representative of a liver. PMID:27649537
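
    The transport equations mentioned above are convective-diffusive-reactive PDEs solved on the 3D liver grid. As a hedged illustration of that kind of discretised update (not the authors' scheme), here is a 1D explicit finite-difference stand-in for dc/dt + u dc/dx = D d2c/dx2 - k c with periodic boundaries; all parameters are placeholders.

      import numpy as np

      def advance_concentration(c, u=1.0, D=0.01, k=0.05, dx=0.1, dt=0.001, steps=1000):
          """Explicit upwind/central update for a 1D convection-diffusion-reaction
          equation, a 1D stand-in for the 3D drug-transport equations solved on
          the virtual liver grid (periodic boundaries via np.roll)."""
          c = c.copy()
          for _ in range(steps):
              conv = -u * (c - np.roll(c, 1)) / dx                       # first-order upwind (u > 0)
              diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
              c = c + dt * (conv + diff - k * c)                          # reaction = linear clearance
          return c

      # Toy usage: a pulse of drug entering at the left of the periodic domain.
      c0 = np.zeros(200)
      c0[:10] = 1.0
      print(advance_concentration(c0).max())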

  12. Building a 3D Virtual Liver: Methods for Simulating Blood Flow and Hepatic Clearance on 3D Structures

    PubMed Central

    Rezania, Vahid; Tuszynski, Jack

    2016-01-01

    In this paper, we develop a spatio-temporal modeling approach to describe blood and drug flow, as well as drug uptake and elimination, on an approximation of the liver. Extending on previously developed computational approaches, we generate an approximation of a liver, which consists of a portal and hepatic vein vasculature structure, embedded in the surrounding liver tissue. The vasculature is generated via constrained constructive optimization, and then converted to a spatial grid of a selected grid size. Estimates for surrounding upscaled lobule tissue properties are then presented appropriate to the same grid size. Simulation of fluid flow and drug metabolism (hepatic clearance) are completed using discretized forms of the relevant convective-diffusive-reactive partial differential equations for these processes. This results in a single stage, uniformly consistent method to simulate equations for blood and drug flow, as well as drug metabolism, on a 3D structure representative of a liver. PMID:27649537

  13. Effect of Visuo-Motor Co-location on 3D Fitts' Task Performance in Physical and Virtual Environments

    PubMed Central

    Fu, Michael J.; Hershberger, Andrew D.; Sano, Kumiko; Çavuşoğlu, M. Cenk

    2013-01-01

    Given the ease that humans have with using a keyboard and mouse in typical, non-colocated computer interaction, many studies have investigated the value of co-locating the visual field and motor workspaces using immersive display modalities. Significant understanding has been gained by previous work comparing physical tasks against virtual tasks, visuo-motor co-location versus non-colocation, and even visuo-motor rotational misalignments in virtual environments (VEs). However, few studies have explored all of these paradigms in context with each other and it is difficult to perform inter-study comparisons because of the variation in tested motor tasks. Therefore, using a stereoscopic fish tank display setup, the goal for the current study was to characterize human performance of a 3D Fitts' point-to-point reaching task using a stylus-based haptic interface in the physical, co-located/non-colocated, and rotated VE visualization conditions. Five performance measures – throughput, initial movement error, corrective movements, and peak velocity – were measured and used to evaluate task performance. These measures were studied in 22 subjects (11 male, 11 female, ages 20–32) performing a 3D variant of Fitts' serial task under 10 task conditions: physical, co-located VE, non-colocated VE, and rotated VEs from 45–315° in 45° increments. Hypotheses: All performance measures in the co-located VE were expected to reflect significantly reduced task performance over the real condition, but also reflect increased performance over the non-colocated VE condition. For rotational misalignments, all performance measures were expected to reflect highest performance at 0°, reduce to lowest performance at 90° and rise again to a local maximum at 180° (symmetric about 0°). Results: All performance measures showed that the co-located VE condition resulted in significantly lower task performance than the physical condition and higher mean performance than the non-colocated VE
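
    Throughput in Fitts'-law studies is conventionally computed as the index of difficulty, ID = log2(D/W + 1), divided by movement time. A short generic sketch follows; the exact formulation used in this study (for example an effective-width correction based on endpoint scatter) is not stated in the abstract, so the numbers below are only placeholders.

      import numpy as np

      def fitts_throughput(distances, widths, movement_times):
          """Per-trial throughput (bits/s) using the Shannon formulation of the
          index of difficulty, ID = log2(D / W + 1); studies often substitute an
          effective width derived from the spread of movement endpoints."""
          D = np.asarray(distances, dtype=float)
          W = np.asarray(widths, dtype=float)
          MT = np.asarray(movement_times, dtype=float)
          ID = np.log2(D / W + 1.0)
          return ID / MT

      # Example: three reaches of 0.20 m to 0.02 m targets taking 0.6-0.8 s.
      print(fitts_throughput([0.2, 0.2, 0.2], [0.02, 0.02, 0.02], [0.6, 0.7, 0.8]))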

  14. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors

    PubMed Central

    Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2015-01-01

    The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded a SD value of 0.75 pIC50 units from measured inhibition affinities and a Pearson’s correlation coefficient R2 of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy set validation was used to measure the efficiency of the model by calculating EF (enrichment factor). Based on the combinatorial pharmacophore model, a virtual screening against SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors. PMID:26110383
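
    The enrichment factor used in the decoy-set validation compares the proportion of actives recovered in the top-ranked fraction of the screened library with the proportion of actives in the whole library. A small sketch of that calculation; the fraction and the toy data are placeholders.

      def enrichment_factor(scores, labels, fraction=0.01):
          """EF at a given fraction of the ranked database:
          EF = (actives in the top fraction / compounds in that fraction)
               / (total actives / total compounds).
          scores: higher means predicted more active; labels: 1 = active, 0 = decoy."""
          ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
          n_top = max(1, int(len(ranked) * fraction))
          hits_top = sum(label for _, label in ranked[:n_top])
          hits_all = sum(labels)
          if hits_all == 0:
              raise ValueError("no actives in the labelled set")
          return (hits_top / n_top) / (hits_all / len(ranked))

      # Example: a toy ranked set where most actives score highly.
      scores = [0.9, 0.85, 0.8, 0.4, 0.3, 0.2, 0.1, 0.05]
      labels = [1, 1, 0, 0, 1, 0, 0, 0]
      print(enrichment_factor(scores, labels, fraction=0.25))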

  15. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors.

    PubMed

    Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2015-06-11

    The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded a SD value of 0.75 pIC50 units from measured inhibition affinities and a Pearson's correlation coefficient R2 of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy set validation was used to measure the efficiency of the model by calculating EF (enrichment factor). Based on the combinatorial pharmacophore model, a virtual screening against SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors.

  16. 3D modeling of the Strasbourg's Cathedral basements for interdisciplinary research and virtual visits

    NASA Astrophysics Data System (ADS)

    Landes, T.; Kuhnle, G.; Bruna, R.

    2015-08-01

    On the occasion of the millennium celebration of Strasbourg Cathedral, a transdisciplinary research group composed of archaeologists, surveyors, architects, art historians and a stonemason revised the 1966-1972 excavations under the St. Lawrence's Chapel of the Cathedral having remains of Roman and medieval masonry. The 3D modeling of the Chapel has been realized based on the combination of conventional surveying techniques for the network creation, laser scanning for the model creation and photogrammetric techniques for the texturing of a few parts. According to the requirements and the end-user of the model, the level of detail and level of accuracy have been adapted and assessed for every floor. The basement has been acquired and modeled with more details and a higher accuracy than the other parts. Thanks to this modeling work, archaeologists can confront their assumptions to those of other disciplines by simulating constructions of other worship edifices on the massive stones composing the basement. The virtual reconstructions provided evidence in support of these assumptions and served for communication via virtual visits.

  17. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules, i.e. a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean current and wind field have been considered in this simulation. On this platform, the oil spilling process can be abstracted as the movement of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
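
    The abstract describes the spill as a cloud of oil particles driven by current and wind. A minimal sketch of one such particle-advection step follows; the wind-drag factor, diffusion coefficient and time step are placeholder values, not parameters published for VV-Ocean.

      import numpy as np

      def advect_oil_particles(pos, current, wind, dt=1.0,
                               wind_factor=0.03, diffusion=0.5, rng=None):
          """One time step for a cloud of surface oil particles (N x 2 positions,
          metres): drift with the ocean current plus a small fraction of the wind,
          plus a random-walk term standing in for turbulent diffusion."""
          if rng is None:
              rng = np.random.default_rng()
          drift = current + wind_factor * wind                      # m/s, shape (2,)
          noise = rng.normal(scale=np.sqrt(2 * diffusion * dt), size=pos.shape)
          return pos + drift * dt + noise

      # Toy usage: 1000 particles released at the origin, eastward current,
      # north-westerly wind, advanced for one minute in 1 s steps.
      pos = np.zeros((1000, 2))
      for _ in range(60):
          pos = advect_oil_particles(pos, current=np.array([0.3, 0.0]),
                                     wind=np.array([5.0, -5.0]), dt=1.0)
      print(pos.mean(axis=0))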

  18. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  19. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    NASA Astrophysics Data System (ADS)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    The ARTEC digital mock-up, for example, allows the individual frames, already polygonal and geo-referenced at the time of capture, to be selected; however, it does not permit automated texturization, unlike the low-cost environment, which produces a good graphic definition. Once the final 3D models were obtained, we carried out a geometric and graphic comparison of the results. To provide an accuracy requirement and an assessment for the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies of the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were then compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips and structure. Such data, currently appreciated only with traditional instruments such as a tape measure, would be well represented by a process of virtual reconstruction and cataloguing.

  20. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large datasets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments, and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes including virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system, and demonstrates the capabilities of the system for various visualization and interaction modes.

  1. Teaching Literature in Virtual Worlds: Immersive Learning in English Studies

    ERIC Educational Resources Information Center

    Webb, Allen, Ed.

    2011-01-01

    What are the realities and possibilities of utilizing on-line virtual worlds as teaching tools for specific literary works? Through engaging and surprising stories from classrooms where virtual worlds are in use, this book invites readers to understand and participate in this emerging and valuable pedagogy. It examines the experience of high…

  2. The Pixelated Professor: Faculty in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Blackmon, Stephanie

    2015-01-01

    Online environments, particularly virtual worlds, can sometimes complicate issues of self expression. For example, the faculty member who loves punk rock has an opportunity, through hairstyle and attire choices in the virtual world, to share that part of herself with students. However, deciding to share that part of the self can depend on a number…

  3. Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments

    NASA Technical Reports Server (NTRS)

    Wu (u. Sjarpm)

    2012-01-01

    The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.

  4. Using Immersive Virtual Reality for Electrical Substation Training

    ERIC Educational Resources Information Center

    Tanaka, Eduardo H.; Paludo, Juliana A.; Cordeiro, Carlúcio S.; Domingues, Leonardo R.; Gadbem, Edgar V.; Euflausino, Adriana

    2015-01-01

    Usually, distribution electricians are called upon to solve technical problems found in electrical substations. In this project, we apply problem-based learning to a training program for electricians, with the help of a virtual reality environment that simulates a real substation. Using this virtual substation, users may safely practice maneuvers…

  5. Numerical simulation of X-wing type biplane flapping wings in 3D using the immersed boundary method.

    PubMed

    Tay, W B; van Oudheusden, B W; Bijl, H

    2014-09-01

    The numerical simulation of an insect-sized 'X-wing' type biplane flapping wing configuration is performed in 3D using an immersed boundary method solver at Reynolds numbers equal to 1000 (1 k) and 5 k, based on the wing's root chord length. This X-wing type flapping configuration draws its inspiration from Delfly, a bio-inspired ornithopter MAV which has two pairs of wings flapping in anti-phase in a biplane configuration. The objective of the present investigation is to assess the aerodynamic performance when the original Delfly flapping wing micro-aerial vehicle (FMAV) is reduced to the size of an insect. Results show that the X-wing configuration gives more than twice the average thrust compared with only flapping the upper pair of wings of the X-wing. However, the X-wing's average thrust is only 40% that of the upper wing flapping at twice the stroke angle. Despite this, the increased stability which results from the smaller lift and moment variation of the X-wing configuration makes it more suited for sharp image capture and recognition. These advantages make the X-wing configuration an attractive alternative design for insect-sized FMAVS compared to the single wing configuration. In the Reynolds number comparison, the vorticity iso-surface plot at a Reynolds number of 5 k revealed smaller, finer vortical structures compared to the simulation at 1 k, due to vortices' breakup. In comparison, the force output difference is much smaller between Re = 1 k and 5 k. Increasing the body inclination angle generates a uniform leading edge vortex instead of a conical one along the wingspan, giving higher lift. Understanding the force variation as the body inclination angle increases will allow FMAV designers to optimize the thrust and lift ratio for higher efficiency under different operational requirements. Lastly, increasing the spanwise flexibility of the wings increases the thrust slightly but decreases the efficiency. The thrust result is similar to one of the
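
    In an immersed boundary solver of the kind used here, the wing is represented by Lagrangian markers whose forces are spread onto the Eulerian fluid grid with a regularised delta function, and grid velocities are interpolated back to the markers with the same kernel. A 2D sketch of that coupling step using Peskin's 4-point kernel follows; marker positions, forces and the grid are placeholders, and the solver in the paper is 3D and includes the full fluid update.

      import numpy as np

      def peskin_delta(r):
          """Peskin's 4-point regularised delta function (1D kernel, grid units)."""
          r = abs(r)
          if r < 1.0:
              return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
          if r < 2.0:
              return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
          return 0.0

      def spread_forces(markers, forces, nx, ny, h=1.0):
          """Spread Lagrangian marker forces onto a 2D Eulerian grid, the core
          fluid-structure coupling step of an immersed boundary solver."""
          f_grid = np.zeros((nx, ny, 2))
          for (xm, ym), fm in zip(markers, forces):
              fm = np.asarray(fm, dtype=float)
              i0, j0 = int(xm / h), int(ym / h)
              for i in range(i0 - 2, i0 + 3):
                  for j in range(j0 - 2, j0 + 3):
                      if 0 <= i < nx and 0 <= j < ny:
                          w = peskin_delta((xm - i * h) / h) * peskin_delta((ym - j * h) / h)
                          f_grid[i, j] += w * fm / h**2
          # Interpolating grid velocity back to the markers reuses the same kernel.
          return f_grid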

  6. Numerical simulation of X-wing type biplane flapping wings in 3D using the immersed boundary method.

    PubMed

    Tay, W B; van Oudheusden, B W; Bijl, H

    2014-09-01

    The numerical simulation of an insect-sized 'X-wing' type biplane flapping wing configuration is performed in 3D using an immersed boundary method solver at Reynolds numbers equal to 1000 (1 k) and 5 k, based on the wing's root chord length. This X-wing type flapping configuration draws its inspiration from Delfly, a bio-inspired ornithopter MAV which has two pairs of wings flapping in anti-phase in a biplane configuration. The objective of the present investigation is to assess the aerodynamic performance when the original Delfly flapping wing micro-aerial vehicle (FMAV) is reduced to the size of an insect. Results show that the X-wing configuration gives more than twice the average thrust compared with only flapping the upper pair of wings of the X-wing. However, the X-wing's average thrust is only 40% that of the upper wing flapping at twice the stroke angle. Despite this, the increased stability which results from the smaller lift and moment variation of the X-wing configuration makes it more suited for sharp image capture and recognition. These advantages make the X-wing configuration an attractive alternative design for insect-sized FMAVS compared to the single wing configuration. In the Reynolds number comparison, the vorticity iso-surface plot at a Reynolds number of 5 k revealed smaller, finer vortical structures compared to the simulation at 1 k, due to vortices' breakup. In comparison, the force output difference is much smaller between Re = 1 k and 5 k. Increasing the body inclination angle generates a uniform leading edge vortex instead of a conical one along the wingspan, giving higher lift. Understanding the force variation as the body inclination angle increases will allow FMAV designers to optimize the thrust and lift ratio for higher efficiency under different operational requirements. Lastly, increasing the spanwise flexibility of the wings increases the thrust slightly but decreases the efficiency. The thrust result is similar to one of the

  7. NanTroSEIZE in 3-D: Creating a Virtual Research Experience in Undergraduate Geoscience Courses

    NASA Astrophysics Data System (ADS)

    Reed, D. L.; Bangs, N. L.; Moore, G. F.; Tobin, H.

    2009-12-01

    Marine research programs, both large and small, have increasingly added a web-based component to facilitate outreach to K-12 and the public, in general. These efforts have included, among other activities, information-rich websites, ship-to-shore communication with scientists during expeditions, blogs at sea, clips on YouTube, and information about daily shipboard activities. Our objective was to leverage a portion of the vast collection of data acquired through the NSF-MARGINS program to create a learning tool with a long lifespan for use in undergraduate geoscience courses. We have developed a web-based virtual expedition, NanTroSEIZE in 3-D, based on a seismic survey associated with the NanTroSEIZE program of NSF-MARGINS and IODP to study the properties of the plate boundary fault system in the upper limit of the seismogenic zone off Japan. The virtual voyage can be used in undergraduate classes at any time, since it is not directly tied to the finite duration of a specific seagoing project. The website combines text, graphics, audio and video to place learning in an experiential framework as students participate on the expedition and carry out research. Students learn about the scientific background of the program, especially the critical role of international collaboration, and meet the chief scientists before joining the sea-going expedition. Students are presented with the principles of 3-D seismic imaging, data processing and interpretation while mapping and identifying the active faults that were the likely sources of devastating earthquakes and tsunamis in Japan in 1944 and 1946. They also learn about IODP drilling that began in 2007 and will extend through much of the next decade. The website is being tested in undergraduate classes in fall 2009 and will be distributed through the NSF-MARGINS website (http://www.nsf-margins.org/) and the MARGINS Mini-lesson section of the Science Education Resource Center (SERC) (http

  8. Evaluation of human behavior in collision avoidance: a study inside immersive virtual reality.

    PubMed

    Ouellette, Michel; Chagnon, Miguel; Faubert, Jocelyn

    2009-04-01

    During our daily displacements, we must take into account the individuals advancing toward us in order to avoid possible collisions with our conspecifics. We developed an experimental design in a virtual immersion room that allows us to evaluate human capacities for avoiding collisions with other people. The design also allows participants to interact naturally inside this immersive virtual reality setup while a pedestrian is moving toward them, creating a possible risk of collision. Results suggest that performance is associated with visual and motor capacities and could be adjusted by cognitive social perception. PMID:19250010

  9. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera and the shape of the eyelids, and--in the case of photographs--they lack depth. Hence, in order to get full control of potentially relevant features we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' abilities to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. PMID:25982719

  10. Assessing endocranial variations in great apes and humans using 3D data from virtual endocasts.

    PubMed

    Bienvenu, Thibaut; Guy, Franck; Coudyzer, Walter; Gilissen, Emmanuel; Roualdès, Georges; Vignaud, Patrick; Brunet, Michel

    2011-06-01

    Modern humans are characterized by their large, complex, and specialized brain. Human brain evolution can be addressed through direct evidence provided by fossil hominid endocasts (i.e. paleoneurology), or through indirect evidence of extant species comparative neurology. Here we use the second approach, providing an extant comparative framework for hominid paleoneurological studies. We explore endocranial size and shape differences among great apes and humans, as well as between sexes. We virtually extracted 72 endocasts, sampling all extant great ape species and modern humans, and digitized 37 landmarks on each for 3D generalized Procrustes analysis. All species can be differentiated by their endocranial shape. Among great apes, endocranial shapes vary from short (orangutans) to long (gorillas), perhaps in relation to different facial orientations. Endocranial shape differences among African apes are partly allometric. Major endocranial traits distinguishing humans from great apes are endocranial globularity, reflecting neurological reorganization, and features linked to structural responses to posture and bipedal locomotion. Human endocasts are also characterized by posterior location of foramina rotunda relative to optic canals, which could be correlated to lesser subnasal prognathism compared to living great apes. Species with larger brains (gorillas and humans) display greater sexual dimorphism in endocranial size, while sexual dimorphism in endocranial shape is restricted to gorillas, differences between males and females being at least partly due to allometry. Our study of endocranial variations in extant great apes and humans provides a new comparative dataset for studies of fossil hominid endocasts.
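
    The endocranial shape comparison rests on generalized Procrustes analysis of 37 landmarks per specimen. A small sketch of the pairwise (ordinary) Procrustes superimposition with SciPy, using synthetic landmark sets in place of real endocast data; full GPA, as used in the paper, iterates this alignment against an evolving mean shape.

      import numpy as np
      from scipy.spatial import procrustes

      # Two illustrative landmark configurations (n_landmarks x 3); in the study
      # each endocast contributes 37 digitised landmarks.
      rng = np.random.default_rng(1)
      endocast_a = rng.normal(size=(37, 3))
      # endocast_b: a rotated, scaled, translated copy of a, plus small shape noise.
      theta = 0.3
      rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
      endocast_b = 1.7 * endocast_a @ rot.T + 5.0 + rng.normal(scale=0.05, size=(37, 3))

      # Ordinary Procrustes superimposition removes translation, scale and rotation;
      # the residual disparity reflects pure shape difference.
      mtx_a, mtx_b, disparity = procrustes(endocast_a, endocast_b)
      print(round(disparity, 4))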

  11. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  12. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
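
    The interlacing step described above, combining a stereo pair into a single picture for field-sequential display, can be sketched as a row-wise merge; this is only one plausible encoding, and the images here are random stand-ins:

        import numpy as np

        def interlace(left, right):
            # even rows carry the left-eye field, odd rows the right-eye field
            out = np.empty_like(left)
            out[0::2] = left[0::2]
            out[1::2] = right[1::2]
            return out

        left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)    # fake grayscale views
        right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        frame = interlace(left, right)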

  14. Inclusion of Immersive Virtual Learning Environments and Visual Control Systems to Support the Learning of Students with Asperger Syndrome

    ERIC Educational Resources Information Center

    Lorenzo, Gonzalo; Pomares, Jorge; Lledo, Asuncion

    2013-01-01

    This paper presents the use of immersive virtual reality systems in the educational intervention with Asperger students. The starting points of this study are features of these students' cognitive style that requires an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality…

  15. Building virtual reality fMRI paradigms: a framework for presenting immersive virtual environments.

    PubMed

    Mueller, Charles; Luehrs, Michael; Baecke, Sebastian; Adolf, Daniela; Luetzkendorf, Ralf; Luchtmann, Michael; Bernarding, Johannes

    2012-08-15

    The advantage of using a virtual reality (VR) paradigm in fMRI is the possibility of interacting with highly realistic environments. This extends the functions of standard fMRI paradigms, where the volunteer usually has a passive role, for example, watching a simple movie paradigm without any stimulus interactions. From that point of view the combined usage of VR and real-time fMRI offers great potential to identify underlying cognitive mechanisms such as spatial navigation, attention, semantic and episodic memory, as well as neurofeedback paradigms. However, the design and the implementation of a VR stimulus paradigm as well as the integration into an existing MR scanner framework are very complex processes. To support the modeling and usage of VR stimuli we developed and implemented a VR stimulus application based on C++. This software allows the fast and easy presentation of VR environments for fMRI studies without any additional expert knowledge. Furthermore, it provides a bidirectional communication interface for receiving real-time data analysis values. In addition, the internal plugin interface enables users to extend the functionality of the software with custom-programmed C++ plugins. The VR stimulus framework was tested in several performance tests and a spatial navigation study. According to the post-experimental interview, all subjects described immersive experiences and a high attentional load inside the artificial environment. Results from other VR spatial memory studies confirm the neuronal activation that was detected in parahippocampal areas, cuneus, and occipital regions.

  16. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by combining…

  17. The Effect of 3D Virtual Learning Environment on Secondary School Third Grade Students' Attitudes toward Mathematics

    ERIC Educational Resources Information Center

    Simsek, Irfan

    2016-01-01

    This research aims to reveal the effects on student attitudes toward mathematics courses of design activities, carried out in Second Life, a three-dimensional online virtual world, that enable secondary school third grade students (primary education seventh grade) to see 3D objects in mathematics courses in a…

  18. A new high-aperture glycerol immersion objective lens and its application to 3D-fluorescence microscopy.

    PubMed

    Martini, N; Bewersdorf, J; Hell, S W

    2002-05-01

    High-resolution light microscopy of glycerol-mounted biological specimens is performed almost exclusively with oil immersion lenses. The reason is that the index of refraction of the oil and the cover slip of approximately 1.51 is close to that of approximately 1.45 of the glycerol mountant, so that refractive index mismatch-induced spherical aberrations are tolerable to some extent. Here we report the application of novel cover glass-corrected glycerol immersion lenses of high numerical aperture (NA) and the avoidance of these aberrations. The new lenses feature a semi-aperture angle of 68.5 degrees, which is slightly larger than that of the diffraction-limited 1.4 NA oil immersion lenses. The glycerol lenses are corrected for a quartz cover glass of 220 µm thickness and for an 80% glycerol-water immersion solution. Featuring an aberration correction collar, the lens can adapt to glycerol concentrations ranging between 72% and 88%, to slight variations of the temperature, and to the cover glass thickness. As the refractive index mismatch-induced aberrations are particularly important to quantitative confocal fluorescence microscopy, we investigated the axial sectioning ability and the axial chromatic aberrations in such a microscope as well as the image brightness as a function of the penetration depth. Whereas there is a significant decrease in image brightness associated with oil immersion, this decrease is absent with the glycerol immersion system. In addition, we show directly the compression of the optic axis in the case of oil immersion and its absence in the glycerol system. The unique advantages of these new lenses in high-resolution microscopy with two coherently used opposing lenses, such as 4Pi-microscopy, are discussed. PMID:12000554

  19. Fusion of image and laser-scanning data in a large-scale 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Shih, Jhih-Syuan; Lin, Ta-Te

    2013-05-01

    Construction of large-scale 3D virtual environments is important in many fields such as robotic navigation, urban planning, transportation, and remote sensing. The laser-scanning approach is the most common method used in constructing 3D models. This paper proposes an automatic method to fuse image and laser-scanning data in a large-scale 3D virtual environment. The system comprises a laser-scanning device installed on a robot platform and the software for data fusion and visualization. The algorithms of data fusion and scene integration are presented. Experiments were performed on the reconstruction of outdoor scenes to test and demonstrate the functionality of the system. We also discuss the efficacy of the system and the technical problems involved in the proposed method.

  20. A cone-beam CT based technique to augment the 3D virtual skull model with a detailed dental surface.

    PubMed

    Swennen, G R J; Mommaerts, M Y; Abeloos, J; De Clercq, C; Lamoral, P; Neyt, N; Casselman, J; Schutyser, F

    2009-01-01

    Cone-beam computed tomography (CBCT) is used for maxillofacial imaging. 3D virtual planning of orthognathic and facial orthomorphic surgery requires detailed visualisation of the interocclusal relationship. This study aimed to introduce and evaluate the use of a double CBCT scan procedure with a modified wax bite wafer to augment the 3D virtual skull model with a detailed dental surface. The impressions of the dental arches and the wax bite wafer were scanned for ten patients separately using a high resolution standardized CBCT scanning protocol. Surface-based rigid registration using ICP (iterative closest points) was used to fit the virtual models on the wax bite wafer. Automatic rigid point-based registration of the wax bite wafer on the patient scan was performed to implement the digital virtual dental arches into the patient's skull model. Probability error histograms showed errors of ≤0.22 mm (25% percentile), ≤0.44 mm (50% percentile) and ≤1.09 mm (90% percentile) for ICP surface matching. The mean registration error for automatic point-based rigid registration was 0.18 ± 0.10 mm (range 0.13-0.26 mm). The results show the potential for a double CBCT scan procedure with a modified wax bite wafer to set up a 3D virtual augmented model of the skull with a detailed dental surface.
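
    The automatic point-based rigid registration and its mean registration error can be illustrated with a small least-squares fit; the corresponding points below are synthetic stand-ins rather than CBCT data, and the helper names are invented:

        import numpy as np

        def rigid_fit(src, dst):
            # least-squares rotation + translation mapping src points onto dst
            sc, dc = src.mean(axis=0), dst.mean(axis=0)
            u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
            d = np.sign(np.linalg.det(vt.T @ u.T))                 # avoid reflections
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            return r, dc - r @ sc

        def mean_registration_error(src, dst):
            r, t = rigid_fit(src, dst)
            return np.linalg.norm(src @ r.T + t - dst, axis=1).mean()

        src = np.random.rand(6, 3) * 50                            # e.g. fiducial points on the wafer
        dst = src + np.array([1.0, -2.0, 0.5]) + np.random.normal(0, 0.1, (6, 3))
        print(mean_registration_error(src, dst))                   # ~0.1 by construction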

  1. Measuring Flow Experience in an Immersive Virtual Environment for Collaborative Learning

    ERIC Educational Resources Information Center

    van Schaik, P.; Martin, S.; Vallance, M.

    2012-01-01

    In contexts other than immersive virtual environments, theoretical and empirical work has identified flow experience as a major factor in learning and human-computer interaction. Flow is defined as a "holistic sensation that people feel when they act with total involvement". We applied the concept of flow to modeling the experience of…

  2. Correcting Distance Estimates by Interacting With Immersive Virtual Environments: Effects of Task and Available Sensory Information

    ERIC Educational Resources Information Center

    Waller, David; Richardson, Adam R.

    2008-01-01

    The tendency to underestimate egocentric distances in immersive virtual environments (VEs) is not well understood. However, previous research (A. R. Richardson & D. Waller, 2007) has demonstrated that a brief period of interaction with the VE prior to making distance judgments can effectively eliminate subsequent underestimation. Here the authors…

  3. The Utility of Using Immersive Virtual Environments for the Assessment of Science Inquiry Learning

    ERIC Educational Resources Information Center

    Code, Jillianne; Clarke-Midura, Jody; Zap, Nick; Dede, Chris

    2013-01-01

    Determining the effectiveness of any educational technology depends upon teachers' and learners' perception of the functional utility of that tool for teaching, learning, and assessment. The Virtual Performance project at Harvard University is developing and studying the feasibility of using immersive technology to develop performance…

  4. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices: mouse, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  5. ARENA - A Collaborative Immersive Environment for Virtual Fieldwork

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.

    2012-12-01

    Whenever a geoscientific study area is not readily accessible, as is the case on the deep seafloor, it is difficult to apply traditional but effective methods of fieldwork, which often require physical presence of the observer. The Artificial Research Environment for Networked Analysis (ARENA), developed at GEOMAR | Helmholtz Centre for Ocean Research Kiel within the Cluster of Excellence "The Future Ocean", provides a backend solution to robotic research on the seafloor by means of an immersive simulation environment for marine research: A hemispherical screen of 6m diameter covering the entire lower hemisphere surrounds a group of up to four researchers at once. A variety of open source (e.g. Microsoft Research World Wide Telescope) and commercial software platforms allow the interaction with e.g. in-situ recorded video, vector maps, terrain, textured geometry, point cloud and volumetric data in four dimensions. Data can be put into a holistic, georeferenced context and viewed on scales stretching from centimeters to global. Several input devices from joysticks to gestures and vocalized commands allow interaction with the simulation, depending on individual preference. Annotations added to the dataset during the simulation session catalyze the following quantitative evaluation. Both the special simulator design, making data perception a group experience, and the ability to connect remote instances or scaled down versions of ARENA over the Internet are significant advantages over established immersive simulation environments.

  6. Magnetic resonance virtual histology for embryos: 3D atlases for automated high-throughput phenotyping.

    PubMed

    Cleary, Jon O; Modat, Marc; Norris, Francesca C; Price, Anthony N; Jayakody, Sujatha A; Martinez-Barbera, Juan Pedro; Greene, Nicholas D E; Hawkes, David J; Ordidge, Roger J; Scambler, Peter J; Ourselin, Sebastien; Lythgoe, Mark F

    2011-01-15

    Ambitious international efforts are underway to produce gene-knockout mice for each of the 25,000 mouse genes, providing a new platform to study mammalian development and disease. Robust, large-scale methods for morphological assessment of prenatal mice will be essential to this work. Embryo phenotyping currently relies on histological techniques but these are not well suited to large volume screening. The qualitative nature of these approaches also limits the potential for detailed group analysis. Advances in non-invasive imaging techniques such as magnetic resonance imaging (MRI) may surmount these barriers. We present a high-throughput approach to generate detailed virtual histology of the whole embryo, combined with the novel use of a whole-embryo atlas for automated phenotypic assessment. Using individual 3D embryo MRI histology, we identified new pituitary phenotypes in Hesx1 mutant mice. Subsequently, we used advanced computational techniques to produce a whole-body embryo atlas from 6 CD-1 embryos, creating an average image with greatly enhanced anatomical detail, particularly in CNS structures. This methodology enabled unsupervised assessment of morphological differences between CD-1 embryos and Chd7 knockout mice (n=5 Chd7(+/+) and n=8 Chd7(+/-), C57BL/6 background). Using a new atlas generated from these three groups, quantitative organ volumes were automatically measured. We demonstrated a difference in mean brain volumes between Chd7(+/+) and Chd7(+/-) mice (42.0 vs. 39.1 mm³, p<0.05). Differences in whole-body, olfactory and normalised pituitary gland volumes were also found between CD-1 and Chd7(+/+) mice (C57BL/6 background). Our work demonstrates the feasibility of combining high-throughput embryo MRI with automated analysis techniques to distinguish novel mouse phenotypes. PMID:20656039
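
    Once every embryo has been mapped onto a labelled atlas, the automatically measured organ volumes amount to counting voxels per label; a toy sketch with invented label ids and an assumed voxel size:

        import numpy as np

        labels = np.random.randint(0, 4, (128, 128, 128))   # stand-in atlas labels, 0 = background
        voxel_size_mm = (0.05, 0.05, 0.05)                   # assumed isotropic 50 um voxels
        voxel_vol = float(np.prod(voxel_size_mm))            # mm^3 per voxel

        BRAIN_LABEL = 1                                       # hypothetical label id
        brain_volume_mm3 = np.count_nonzero(labels == BRAIN_LABEL) * voxel_vol
        print(f"brain volume: {brain_volume_mm3:.1f} mm^3")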

  7. Hsp90 inhibitors, part 1: definition of 3-D QSAutogrid/R models as a tool for virtual screening.

    PubMed

    Ballante, Flavio; Caroli, Antonia; Wickersham, Richard B; Ragno, Rino

    2014-03-24

    The multichaperone heat shock protein (Hsp) 90 complex mediates the maturation and stability of a variety of oncogenic signaling proteins. For this reason, Hsp90 has emerged as a promising target for anticancer drug development. Herein, we describe a complete computational procedure for building several 3-D QSAR models used as a ligand-based (LB) component of a comprehensive ligand-based (LB) and structure-based (SB) virtual screening (VS) protocol to identify novel molecular scaffolds of Hsp90 inhibitors. By the application of the 3-D QSAutogrid/R method, eight SB PLS 3-D QSAR models were generated, leading to a final multiprobe (MP) 3-D QSAR pharmacophoric model capable of recognizing the most significant chemical features for Hsp90 inhibition. Both the monoprobe and multiprobe models were optimized, cross-validated, and tested against an external test set. The obtained statistical results confirmed the models as robust and predictive to be used in a subsequent VS.
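
    At the core of such 3-D QSAR modelling is a partial least squares (PLS) regression from grid-based interaction descriptors to activity, validated by cross-validation. A generic sketch with random stand-in data (not the QSAutogrid/R implementation):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))     # probe-grid interaction energies per ligand (fake)
        y = rng.normal(size=60)            # e.g. pIC50 activities (fake)

        pls = PLSRegression(n_components=5)
        y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
        q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"cross-validated q2 = {q2:.2f}")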

  8. Workshop Report on Virtual Worlds and Immersive Environments

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephanie R.; Cowan-Sharp, Jessy; Dodson, Karen E.; Damer, Bruce; Ketner, Bob

    2009-01-01

    The workshop revolved around three framing ideas or scenarios about the evolution of virtual environments: 1. Remote exploration: The ability to create high fidelity environments rendered from external data or models such that exploration, design and analysis that is truly interoperable with the physical world can take place within them. 2. We all get to go: The ability to engage anyone in being a part of or contributing to an experience (such as a space mission), no matter their training or location. It is the creation of a new paradigm for education, outreach, and the conduct of science in society that is truly participatory. 3. Become the data: A vision of a future where boundaries between the physical and the virtual have ceased to be meaningful. What would this future look like? Is this plausible? Is it desirable? Why and why not?

  9. A Randomized, Controlled Trial of Immersive Virtual Reality Analgesia during Physical Therapy for Pediatric Burn Injuries

    PubMed Central

    Schmitt, Yuko S.; Hoffman, Hunter G.; Blough, David K.; Patterson, David R.; Jensen, Mark P.; Soltani, Maryam; Carrougher, Gretchen J.; Nakamura, Dana; Sharar, Sam R.

    2010-01-01

    This randomized, controlled, within-subjects (crossover design) study examined the effects of immersive virtual reality as an adjunctive analgesic technique for hospitalized pediatric burn inpatients undergoing painful physical therapy. Fifty-four subjects (6–19 years old) performed range-of-motion exercises under a therapist’s direction for one to five days. During each session, subjects spent equivalent time in both the virtual reality and the control conditions (treatment order randomized and counterbalanced). Graphic rating scale scores assessing the sensory, affective, and cognitive components of pain were obtained for each treatment condition. Secondary outcomes assessed subjects’ perception of the virtual reality experience and maximum range-of-motion. Results showed that on study day one, subjects reported significant decreases (27–44%) in pain ratings during virtual reality. They also reported improved affect (“fun”) during virtual reality. The analgesia and affect improvements were maintained with repeated virtual reality use over multiple therapy sessions. Maximum range-of-motion was not different between treatment conditions, but was significantly greater after the second treatment condition (regardless of treatment order). These results suggest that immersive virtual reality is an effective nonpharmacologic, adjunctive pain reduction technique in the pediatric burn population undergoing painful rehabilitation therapy. The magnitude of the analgesic effect is clinically meaningful and is maintained with repeated use. PMID:20692769

  10. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... tech medical fields of biomedical visualization, computer graphics, virtual reality, and multimedia. The year was 1994. Kaufman's "two- ... organ, like the colon—and view it in virtual reality." Later, he and his team used it with ...

  11. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
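
    The reported distance and orientation errors can be computed by comparing the calculated pose with the pose measured on the micro-positioning stage: translation error is a Euclidean distance, and orientation error is the angle of the relative rotation. A sketch with invented pose values:

        import numpy as np

        def pose_error(R_calc, t_calc, R_meas, t_meas):
            dist_err = np.linalg.norm(t_calc - t_meas)                     # e.g. in mm
            R_rel = R_calc @ R_meas.T                                      # relative rotation
            angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
            return dist_err, np.degrees(angle)

        R_meas, t_meas = np.eye(3), np.array([10.0, 5.0, 30.0])
        c, s = np.cos(np.radians(2.5)), np.sin(np.radians(2.5))
        R_calc = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])    # 2.5 deg off about z
        t_calc = t_meas + np.array([2.0, -1.5, 1.0])
        print(pose_error(R_calc, t_calc, R_meas, t_meas))                  # ~ (2.7 mm, 2.5 deg)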

  12. InSPAL: A Novel Immersive Virtual Learning Programme.

    PubMed

    Byrne, Julia; Ip, Horace H S; Shuk-Ying Lau, Kate; Chen Li, Richard; Tso, Amy; Choi, Catherine

    2015-01-01

    In this paper we introduce the Interactive Sensory Program for Affective Learning (InSPAL), a pioneering virtual learning programme designed for severely intellectually disabled (SID) students, who have cognitive deficiencies and other sensory-motor handicaps and thus need more help and attention in overcoming their learning difficulties. By combining and integrating interactive media and virtual reality technology with the principles of art therapy and relevant pedagogical techniques, InSPAL aims to strengthen SID students' pre-learning abilities, promote their self-awareness, decrease behavioral interferences with learning as well as social interaction, enhance their communication, and thus promote their quality of life. Results of our study show that students who went through our programme were more focused, and their ability to do things independently increased by 15%. Moreover, 50% of the students showed a marked improvement in the ability to raise their hands in response, thus increasing their communication skills. The use of therapeutic interventions enabled better control of the body, mind and emotions, resulting in greater performance and better participation.

  14. 'My Virtual Dream': Collective Neurofeedback in an Immersive Art Environment.

    PubMed

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions.
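
    The neurofeedback signal described above boils down to relative spectral power in the alpha and beta bands. A minimal sketch with a synthetic one-channel signal and nominal band limits; the installation's actual processing chain will differ:

        import numpy as np
        from scipy.signal import welch

        fs = 256.0                                         # assumed sampling rate (Hz)
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # fake alpha-heavy channel

        freqs, psd = welch(eeg, fs=fs, nperseg=512)

        def rel_power(lo, hi):
            band = (freqs >= lo) & (freqs < hi)
            return psd[band].sum() / psd.sum()

        alpha = rel_power(8, 13)      # relaxation feedback signal
        beta = rel_power(13, 30)      # concentration feedback signal
        print(alpha, beta)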

  16. The Immersive Virtual Reality Experience: A Typology of Users Revealed Through Multiple Correspondence Analysis Combined with Cluster Analysis Technique.

    PubMed

    Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz

    2016-03-01

    Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, higher predisposition to be immersed and reduced cybersickness symptoms in the VRE than console gamer and nongamer. These findings can be a useful orientation in clinical practice and future research as they help identify which users are more predisposed to benefit from immersive VREs.
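
    The profiling approach, dimension reduction of categorical user attributes followed by clustering, can be approximated in a few lines; here TruncatedSVD on a one-hot indicator matrix stands in for a full multiple correspondence analysis, and all data and column names are invented:

        import pandas as pd
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        users = pd.DataFrame({
            "gamer_type": ["pc", "console", "non", "pc", "console", "non", "pc", "pc"],
            "age_group": ["18-25", "26-35", "36+", "18-25", "36+", "26-35", "18-25", "26-35"],
            "cybersickness": ["low", "high", "high", "low", "high", "high", "low", "low"],
        })

        indicator = pd.get_dummies(users)                      # one-hot (indicator) matrix
        coords = TruncatedSVD(n_components=2).fit_transform(indicator)
        profiles = KMeans(n_clusters=3, n_init=10).fit_predict(coords)
        print(profiles)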

  18. Calculation of the virtual current in an electromagnetic flow meter with one bubble using 3D model.

    PubMed

    Zhang, Xiao-Zhang; Li, Yantao

    2004-04-01

    Based on the theory of electromagnetic induction flow measurement, the Laplace equation in a complicated three-dimensional (3D) domain is solved by an alternating method. Virtual current potentials are obtained for an electromagnetic flow meter with one spherical bubble inside. The solutions are used to investigate the effects of bubble size and bubble position on the virtual current. Comparisons are done among the cases of 2D and 3D models, and of point electrode and large electrode. The results show that the 2D model overestimates the effect, while large electrodes are least sensitive to the bubble. This paper offers fundamentals for the study of the behavior of an electromagnetic flow meter in multiphase flow. For application, the results provide a possible way to estimate errors of the flow meter caused by multiphase flow.
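
    A toy version of the underlying potential calculation, a 3D Laplace equation relaxed on a regular grid with electrode-like boundary values, is shown below; the paper's alternating method and the bubble geometry are not reproduced, and the grid size and potentials are arbitrary:

        import numpy as np

        n = 24
        phi = np.zeros((n, n, n))
        phi[:, :, 0], phi[:, :, -1] = 1.0, -1.0       # electrode-like boundary potentials

        for _ in range(500):                           # Jacobi sweeps of the Laplace equation
            phi[1:-1, 1:-1, 1:-1] = (phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
                                     phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
                                     phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2]) / 6.0

        print(phi[n // 2, n // 2, n // 2])             # potential near the grid centre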

  19. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of accessible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method for learning the techniques of conducting operations in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient, highly realistic rendering of the operating environment and anatomical body structures, the use of game mechanics to ease the perception of information and accelerate the memorization of methods, and the accessibility of the program.

  20. Building a virtual archive using brain architecture and Web 3D to deliver neuropsychopharmacology content over the Internet.

    PubMed

    Mongeau, R; Casu, M A; Pani, L; Pillolla, G; Lianas, L; Giachetti, A

    2008-05-01

    The vast amount of heterogeneous data generated in various fields of neurosciences such as neuropsychopharmacology can hardly be classified using traditional databases. We present here the concept of a virtual archive, spatially referenced over a simplified 3D brain map and accessible over the Internet. A simple prototype (available at http://aquatics.crs4.it/neuropsydat3d) has been realized using current Web-based virtual reality standards and technologies. It illustrates how primary literature or summary information can easily be retrieved through hyperlinks mapped onto a 3D schema while navigating through neuroanatomy. Furthermore, 3D navigation and visualization techniques are used to enhance the representation of brain's neurotransmitters, pathways and the involvement of specific brain areas in any particular physiological or behavioral functions. The system proposed shows how the use of a schematic spatial organization of data, widely exploited in other fields (e.g. Geographical Information Systems) can be extremely useful to develop efficient tools for research and teaching in neurosciences. PMID:18262677

  2. Cultivating Imagination: Development and Pilot Test of a Therapeutic Use of an Immersive Virtual Reality CAVE

    PubMed Central

    Brennan, Patricia Flatley; Nicolalde, F. Daniel; Ponto, Kevin; Kinneberg, Megan; Freese, Vito; Paz, Dana

    2013-01-01

    As informatics applications grow from being data collection tools to platforms for action, the boundary between what constitutes informatics applications and therapeutic interventions begins to blur. Emerging computer-driven technologies such as virtual reality (VR) and mHealth apps may serve as clinical interventions. As part of a larger project intended to provide complements to cognitive behavioral approaches to health behavior change, an interactive scenario was designed to permit unstructured play inside an immersive 6-sided VR CAVE. In this pilot study we examined the technical and functional performance of the CAVE scenario, human tolerance of immersive CAVE experiences, and explored human imagination and the manner in which activity in the CAVE scenarios varied by an individual's level of imagination. Nine adult volunteers participated in a pilot-and-feasibility study. Participants tolerated the 15-minute exposure to the scenarios and navigated through the virtual world. Relationships between personal characteristics and behaviors are reported and explored. PMID:24551327

  3. A Methodology for Elaborating Activities for Higher Education in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Bravo, Javier; García-Magariño, Iván

    2015-01-01

    Distance education was initially limited in comparison to traditional education. Distance teachers and educational organizations have overcome most of these limits, but some other limits still remain as challenges. One of these challenges is to collaboratively learn concepts in an immersive way, similar to education "in situ".…

  4. Efficient Unstructured Cartesian/Immersed-Boundary Method with Local Mesh Refinement to Simulate Flows in Complex 3D Geometries

    NASA Astrophysics Data System (ADS)

    de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit

    2008-11-01

    Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.

  5. Studying social interactions through immersive virtual environment technology: virtues, pitfalls, and future challenges

    PubMed Central

    Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel

    2015-01-01

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET researchers have full control over the interaction partners, can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that indeed studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Beside the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing) and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414

  6. Immersive virtual reality platform for medical training: a "killer-application".

    PubMed

    2000-01-01

    The Medical Readiness Trainer (MRT) integrates fully immersive Virtual Reality (VR), highly advanced medical simulation technologies, and medical data to enable unprecedented medical education and training. The flexibility offered by the MRT environment serves as a practical teaching tool today and in the near future will serve as an ideal vehicle for facilitating the transition to the next level of medical practice, i.e., telepresence and next generation Internet-based collaborative learning. PMID:10977542

  7. An integrated multidisciplinary re-evaluation of the geothermal system at Valles Caldera, New Mexico, using an immersive three-dimensional (3D) visualization environment

    NASA Astrophysics Data System (ADS)

    Fowler, A.; Bennett, S. E.; Wildgoose, M.; Cantwell, C.; Elliott, A. J.

    2012-12-01

    We describe an approach to explore the spatial relationships of a geothermal resource by examining diverse geological, geophysical, and geochemical data sets using the immersive 3-dimensional (3D) visualization capabilities of the UC Davis Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). The KeckCAVES is a facility where stereoscopic images are projected onto four surfaces (three walls and a floor), which the user perceives as a seamless 3D image of the data. The user can manipulate and interact with the data, allowing a more intuitive interpretation of data set relationships than is possible with traditional 2-dimensional techniques. We incorporate multiple data sets of the geothermal system at Valles Caldera, New Mexico: topography, lithology, faults, temperature, alteration mineralogy, and magnetotellurics. With the ability to rapidly and intuitively observe data relationships, we are able to efficiently and rapidly draw conclusions about the subsurface architecture of the Valles Caldera geothermal system. We identify two high-temperature anomalies, one that corresponds with normal faults along the western caldera ring fracture, and one that corresponds with the resurgent dome. A cold-temperature anomaly identified adjacent to the resurgent dome high-temperature anomaly appears to relate to a fault-controlled graben valley that acts as a recharge zone, likely funneling cold meteoric water into the subsurface along normal faults observed on published maps and cross sections. These high-temperature anomalies broadly correspond to subsurface regions where previous magnetotelluric studies have identified low apparent resistivity. Existing hot springs in the Sulfur Springs area correspond to the only location where our modeled 100°C isotherm intersects the ground surface. Correlations between the first occurrence of key alteration minerals (pyrite, chlorite, epidote) in previously drilled boreholes and our temperature model vary, with chlorite showing a

  8. The effect of degree of immersion upon learning performance in virtual reality simulations for medical education.

    PubMed

    Gutiérrez, Fátima; Pierce, Jennifer; Vergara, Víctor M; Coulter, Robert; Saland, Linda; Caudell, Thomas P; Goldsmith, Timothy E; Alverson, Dale C

    2007-01-01

    Simulations are being used in education and training to enhance understanding, improve performance, and assess competence. However, it is important to measure the performance of these simulations as learning and training tools. This study examined and compared knowledge acquisition using a knowledge structure design. The subjects were first-year medical students at The University of New Mexico School of Medicine. One group used a fully immersed virtual reality (VR) environment using a head-mounted display (HMD) and another group used a partially immersed (computer screen) VR environment. The study aims were to determine whether there were significant differences between the two groups as measured by changes in knowledge structure before and after the VR simulation experience. The results showed that both groups benefited from the VR simulation training as measured by the significantly increased similarity to the expert knowledge network after the training experience. However, the immersed group showed a significantly higher gain than the partially immersed group. This study demonstrated a positive effect of VR simulation on learning as reflected by improvements in knowledge structure, and an enhanced effect of full immersion using an HMD vs. a screen-based VR system.
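
    The knowledge-structure comparison described above reduces, in its simplest form, to measuring how many links a learner's concept network shares with the expert network. The sketch below uses an illustrative overlap (Jaccard) index and invented concept pairs; the study's actual knowledge-structure metric may differ:

        # concept networks as sets of undirected links (frozensets make order irrelevant)
        expert = {frozenset(p) for p in [("heart rate", "blood pressure"),
                                         ("bleeding", "shock"),
                                         ("shock", "blood pressure"),
                                         ("airway", "oxygenation")]}
        student_pre = {frozenset(p) for p in [("heart rate", "blood pressure"),
                                              ("airway", "bleeding")]}
        student_post = {frozenset(p) for p in [("heart rate", "blood pressure"),
                                               ("bleeding", "shock"),
                                               ("airway", "oxygenation")]}

        def similarity(a, b):
            return len(a & b) / len(a | b)       # Jaccard index over shared links

        print(similarity(student_pre, expert), similarity(student_post, expert))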

  9. Immersed boundary Eulerian-Lagrangian 3D simulation of pyroclastic density currents: numerical scheme and experimental validation

    NASA Astrophysics Data System (ADS)

    Doronzo, Domenico Maria; de Tullio, Marco; Pascazio, Giuseppe; Dellino, Pierfrancesco

    2010-05-01

    Pyroclastic density currents are ground hugging, hot, gas-particle flows representing the most hazardous events of explosive volcanism. Their impact on structures is a function of dynamic pressure, which expresses the lateral load that such currents exert over buildings. In this paper we show how analog experiments can be matched with numerical simulations for capturing the essential physics of the multiphase flow. We used an immersed boundary scheme for the mesh generation, which helped in reconstructing the steep velocity and particle concentration gradients near the ground surface. Results show that the calculated values of dynamic pressure agree reasonably with the experimental measurements. These outcomes encourage future application of our method for the assessment of the impact of pyroclastic density currents at the natural scale.
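
    The lateral load mentioned above is the dynamic pressure of the gas-particle mixture, q = 0.5 * rho_mix * v^2; a back-of-the-envelope calculation with nominal values, not the paper's experimental parameters:

        rho_gas, rho_particle = 0.6, 2500.0     # kg/m^3: hot volcanic gas vs. ash particles (nominal)
        c = 0.001                                # particle volume concentration (assumed, dilute)
        velocity = 20.0                          # m/s near the ground (assumed)

        rho_mix = c * rho_particle + (1.0 - c) * rho_gas
        dyn_pressure = 0.5 * rho_mix * velocity ** 2        # Pa, lateral load proxy on a structure
        print(f"{dyn_pressure / 1000.0:.2f} kPa")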

  10. Brave New (Interactive) Worlds: A Review of the Design Affordances and Constraints of Two 3D Virtual Worlds as Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2005-01-01

    Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…

  11. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study

    PubMed Central

    Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-01-01

    Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real
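
    The repeated measures mixed models mentioned in the abstract can be expressed compactly with statsmodels; the sketch below compares one food group's expenditure share between settings with a random intercept per participant, using invented numbers and column names:

        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.DataFrame({
            "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
            "setting": ["virtual", "real"] * 6,
            "fruit_veg_share": [14.1, 17.2, 13.9, 18.0, 15.0, 16.5,
                                14.6, 17.8, 13.5, 16.9, 14.8, 17.1],
        })

        model = smf.mixedlm("fruit_veg_share ~ setting", data, groups=data["participant"])
        print(model.fit().summary())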

  12. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence. PMID:26366305

  15. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    PubMed

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on the external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver the internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter of which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. PMID:22727689

  16. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    PubMed

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on the external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver the internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter of which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.

  17. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed.

  18. Cortical correlate of spatial presence in 2D and 3D interactive virtual reality: an EEG study.

    PubMed

    Kober, Silvia Erika; Kurzmann, Jürgen; Neuper, Christa

    2012-03-01

    The present study is the first that examined neuronal underpinnings of spatial presence using multi-channel EEG in an interactive virtual reality (VR). We compared two VR-systems: a highly immersive Single-Wall-VR-system (three-dimensional view, large screen) and a less immersive Desktop-VR-system (two-dimensional view, small screen). Twenty-nine participants performed a spatial navigation task in a virtual maze and had to state their sensation of "being there" on a 5-point rating scale. Task-related power decrease/increase (TRPD/TRPI) in the Alpha band (8-12Hz) and coherence analyses in different frequency bands were used to analyze the EEG data. The Single-Wall-VR-system caused a more intense presence experience than the Desktop-VR-system. This increased feeling of presence in the Single-Wall-VR-condition was accompanied by an increased parietal TRPD in the Alpha band, which is associated with cortical activation. The lower presence experience in the Desktop-VR-group was accompanied by a stronger functional connectivity between frontal and parietal brain regions indicating that the communication between these two brain areas is crucial for the presence experience. Hence, we found a positive relationship between presence and parietal brain activation and a negative relationship between presence and frontal brain activation in an interactive VR-paradigm, supporting the results of passive non-interactive VR-studies.
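
    As a rough illustration of the task-related power measure used above (not the authors' pipeline), the sketch below computes the relative alpha-band (8-12 Hz) power change between a rest and a task recording with Welch's method; the sampling rate and the two signals are placeholders.

        # Minimal sketch (not the authors' pipeline): task-related power change in
        # the alpha band (8-12 Hz) for one EEG channel, using Welch's PSD estimate.
        import numpy as np
        from scipy.signal import welch

        def alpha_power(signal, fs):
            """Mean spectral power in the 8-12 Hz band."""
            freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
            band = (freqs >= 8.0) & (freqs <= 12.0)
            return psd[band].mean()

        fs = 256                                 # assumed sampling rate in Hz
        rest = np.random.randn(30 * fs)          # placeholder rest-period EEG
        task = np.random.randn(30 * fs)          # placeholder task-period EEG

        # Negative values indicate a task-related power decrease (TRPD),
        # conventionally read as cortical activation.
        change = (alpha_power(task, fs) - alpha_power(rest, fs)) / alpha_power(rest, fs)
        print(f"relative alpha-power change: {change:+.2%}")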

  19. Crowd behaviour during high-stress evacuations in an immersive virtual environment

    PubMed Central

    Moussaïd, Mehdi; Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W.; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-01-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. PMID:27605166

  20. Crowd behaviour during high-stress evacuations in an immersive virtual environment.

    PubMed

    Moussaïd, Mehdi; Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-09-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. PMID:27605166

  1. Proteopedia: A Collaborative, Virtual 3D Web-Resource for Protein and Biomolecule Structure and Function

    ERIC Educational Resources Information Center

    Hodis, Eran; Prilusky, Jaime; Sussman, Joel L.

    2010-01-01

    Protein structures are hard to represent on paper. They are large, complex, and three-dimensional (3D)--four-dimensional if conformational changes count! Unlike most of their substrates, which can easily be drawn out in full chemical formula, drawing every atom in a protein would usually be a mess. Simplifications like showing only the surface of…

  2. Source fields reconstruction with 3D mapping by means of the virtual acoustic volume concept

    NASA Astrophysics Data System (ADS)

    Forget, S.; Totaro, N.; Guyader, J. L.; Schaeffer, M.

    2016-10-01

    This paper presents the theoretical framework of the virtual acoustic volume concept and two related inverse Patch Transfer Functions (iPTF) identification methods (called u-iPTF and m-iPTF depending on the chosen boundary conditions for the virtual volume). They are based on the application of Green's identity on an arbitrary closed virtual volume defined around the source. The reconstruction of sound source fields combines discrete acoustic measurements performed at accessible positions around the source with the modal behavior of the chosen virtual acoustic volume. The mode shapes of the virtual volume can be computed by a Finite Element solver to handle the geometrical complexity of the source. As a result, it is possible to identify all the acoustic source fields at the real surface of an irregularly shaped structure and irrespective of its acoustic environment. The m-iPTF method is introduced for the first time in this paper. In contrast to the already published u-iPTF method, the m-iPTF method requires only acoustic pressure measurements and avoids particle velocity measurements. This paper is focused on its validation, both with numerical computations and by experiments on a baffled oil pan.

  3. Immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator.

    PubMed

    Alshaer, Abdulaziz; Regenbrecht, Holger; O'Hare, David

    2017-01-01

    Virtual Reality-based driving simulators are increasingly used to train and assess users' abilities to operate vehicles in a controlled and safe way. For the development of those simulators it is important to identify and evaluate design factors affecting perception, behaviour, and driving performance. In an exemplary power wheelchair simulator setting we identified three immersion factors: display type (head-mounted display vs. monitor), the ability to freely change the field of view (FOV), and the visualisation of the user's avatar, as potentially affecting perception and behaviour. In a study with 72 participants we found that all three factors affected the participants' sense of presence in the virtual environment. In particular, the display type significantly affected both perceptual and behavioural measures, whereas FOV only affected behavioural measures. Our findings could guide future Virtual Reality simulator designers to evoke targeted user behaviours and perceptions. PMID:27633192

  4. A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

    PubMed Central

    Perez-Marcos, Daniel; Solazzi, Massimiliano; Steptoe, William; Oyekoya, Oyewole; Frisoli, Antonio; Weyrich, Tim; Steed, Anthony; Tecchia, Franco; Slater, Mel; Sanchez-Vives, Maria V.

    2012-01-01

    Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems. PMID:22787454

  5. Virtually compliant: Immersive video gaming increases conformity to false computer judgments.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen; Sharma, Dinkar; Gonidis, Lazaros

    2015-08-01

    Real-life encounters involving face-to-face contact are on the decline in a world in which many routine tasks are delegated to virtual characters, a development that bears both opportunities and risks. Interacting with such virtual-reality beings is particularly common during role-playing videogames, in which we incarnate into the virtual reality of an avatar. Video gaming is known to lead to the training and development of real-life skills and behaviors; hence, in the present study we sought to explore whether role-playing video gaming primes individuals' identification with a computer enough to increase computer-related social conformity. Following immersive video gaming, individuals were indeed more likely to give up their own best judgment and to follow the vote of computers, especially when the stimulus context was ambiguous. Implications for human-computer interactions and for our understanding of the formation of identity and self-concept are discussed.

  6. Immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator.

    PubMed

    Alshaer, Abdulaziz; Regenbrecht, Holger; O'Hare, David

    2017-01-01

    Virtual Reality-based driving simulators are increasingly used to train and assess users' abilities to operate vehicles in a controlled and safe way. For the development of those simulators it is important to identify and evaluate design factors affecting perception, behaviour, and driving performance. In an exemplary power wheelchair simulator setting we identified three immersion factors: display type (head-mounted display vs. monitor), the ability to freely change the field of view (FOV), and the visualisation of the user's avatar, as potentially affecting perception and behaviour. In a study with 72 participants we found that all three factors affected the participants' sense of presence in the virtual environment. In particular, the display type significantly affected both perceptual and behavioural measures, whereas FOV only affected behavioural measures. Our findings could guide future Virtual Reality simulator designers to evoke targeted user behaviours and perceptions.

  7. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains of sometimes more than 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we present different applications of our model for exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system namely: depth-of-field blur, camera
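
    A toy sketch of the kind of bottom-up/top-down combination described above is given below; the saliency maps, weights, and screen size are invented for illustration, and the authors' surface-element representation and reflex simulation are not modelled.

        # Toy sketch: combine a bottom-up saliency map with a top-down task map and
        # return a single continuous gaze point (screen coordinates of the maximum).
        import numpy as np

        def gaze_point(bottom_up, top_down, w_bu=0.6, w_td=0.4):
            """bottom_up, top_down: 2-D arrays normalised to [0, 1], same shape."""
            combined = w_bu * bottom_up + w_td * top_down
            y, x = np.unravel_index(np.argmax(combined), combined.shape)
            return x, y   # pixel coordinates of the estimated gaze point

        h, w = 720, 1280                        # assumed screen resolution
        bottom_up = np.random.rand(h, w)        # placeholder image-driven saliency
        top_down = np.zeros((h, w))
        top_down[300:420, 600:700] = 1.0        # placeholder task-relevant region
        print(gaze_point(bottom_up, top_down))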

  8. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains of sometimes more than 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we present different applications of our model for exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system namely: depth-of-field blur, camera

  9. Virtually supportive: A feasibility pilot study of an online support group for dementia caregivers in a 3D virtual environment

    PubMed Central

    O’Connor, Mary-Frances; Arizmendi, Brian J.; Kaszniak, Alfred W.

    2014-01-01

    Caregiver support groups effectively reduce stress from caring for someone with dementia. These same demands can prevent participation in a group. The present feasibility study investigated a virtual online caregiver support group to bring the support group into the home. While online groups have been shown to be helpful, submissions to a message board (vs. live conversation) can feel impersonal. By using avatars, participants interacted via real-time chat in a virtual environment in an 8-week support group. Data indicated lower levels of perceived stress, depression and loneliness across participants. Importantly, satisfaction reports also indicate that caregivers overcame the barriers to participation, and had a strong sense of the group’s presence. This study provides the framework for an accessible and low-cost online support group for dementia caregivers. The study demonstrates the feasibility of an interactive group in a virtual environment for engaging members in meaningful interaction. PMID:24984911

  10. Drumming in immersive virtual reality: the body shapes the way we play.

    PubMed

    Kilteni, Konstantina; Bergstrom, Ilias; Slater, Mel

    2013-04-01

    It has been shown that it is possible to generate perceptual illusions of ownership in immersive virtual reality (IVR) over a virtual body seen from a first-person perspective, in other words over a body that visually substitutes the person's real body. This can occur even when the virtual body is quite different in appearance from the person's real body. However, investigation of the psychological, behavioral and attitudinal consequences of such body transformations remains an interesting problem with much to be discovered. Thirty-six Caucasian people participated in a between-groups experiment where they played a West-African Djembe hand drum while immersed in IVR and with a virtual body that substituted their own. The virtual hand drum was registered with a physical drum. They were alongside a virtual character that played a drum in a supporting, accompanying role. In a baseline condition participants were represented only by plainly shaded white hands, so that they were able merely to play. In the experimental condition they were represented either by a casually dressed dark-skinned virtual body (Casual Dark-Skinned - CD) or by a formal suited light-skinned body (Formal Light-Skinned - FL). Although participants of both groups experienced a strong body ownership illusion towards the virtual body, only those with the CD representation showed significant increases in their movement patterns for drumming compared to the baseline condition and compared with those embodied in the FL body. Moreover, the stronger the illusion of body ownership in the CD condition, the greater this behavioral change. A path analysis showed that the observed behavioral changes were a function of the strength of the illusion of body ownership towards the virtual body and its perceived appropriateness for the drumming task. These results demonstrate that full body ownership illusions can lead to substantial behavioral and possibly cognitive changes depending on the appearance of the virtual

  11. Taking Science Online: Evaluating Presence and Immersion through a Laboratory Experience in a Virtual Learning Environment for Entomology Students

    ERIC Educational Resources Information Center

    Annetta, Leonard; Klesath, Marta; Meyer, John

    2009-01-01

    A 3-D virtual field trip was integrated into an online college entomology course and developed as a trial for the possible incorporation of future virtual environments to supplement online higher education laboratories. This article provides an explanation of the rationale behind creating the virtual experience, the Bug Farm; the method and…

  12. Identification of potential influenza virus endonuclease inhibitors through virtual screening based on the 3D-QSAR model.

    PubMed

    Kim, J; Lee, C; Chong, Y

    2009-01-01

    Influenza endonucleases have emerged as an attractive target of antiviral therapy for influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activities against influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q(2) = 0.763, r(2) = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in the UNITY search of the ChemDiv compound library to retrieve candidate active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates.
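
    The cross-validated q(2) statistic quoted above is typically obtained by leave-one-out prediction; the generic sketch below shows how such a value can be computed for a PLS regression model, with a made-up descriptor matrix standing in for the CoMSIA fields.

        # Generic sketch of a leave-one-out cross-validated q^2, the statistic quoted
        # for the CoMSIA model (the descriptors and activities below are placeholders,
        # not the 34 endonuclease inhibitors used in the study).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(0)
        X = rng.normal(size=(34, 50))            # placeholder molecular descriptors
        y = rng.normal(size=34)                  # placeholder activity values

        press, ss = 0.0, np.sum((y - y.mean()) ** 2)
        for train, test in LeaveOneOut().split(X):
            model = PLSRegression(n_components=3).fit(X[train], y[train])
            press += float((model.predict(X[test]).ravel()[0] - y[test][0]) ** 2)

        q2 = 1.0 - press / ss                    # q^2 = 1 - PRESS / SS
        print(f"q2 = {q2:.3f}")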

  13. "The Evolution of e-Learning in the Context of 3D Virtual Worlds"

    ERIC Educational Resources Information Center

    Kotsilieris, Theodore; Dimopoulou, Nikoletta

    2013-01-01

    Information and Communication Technologies (ICT) offer new approaches towards knowledge acquisition and collaboration through distance learning processes. Web-based Learning Management Systems (LMS) have transformed the way that education is conducted nowadays. At the same time, the adoption of Virtual Worlds in the educational process is of great…

  14. Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis

    ERIC Educational Resources Information Center

    Chow, Meyrick

    2016-01-01

    There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…

  15. The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers

    ERIC Educational Resources Information Center

    Can, Tuncer; Simsek, Irfan

    2015-01-01

    Recent developments in computer and Internet technologies and in three-dimensional modelling necessitate new approaches and methods in the education field and bring new opportunities to higher education. The Internet and virtual learning environments have changed learning opportunities by diversifying the learning options not…

  16. Virtual Presence and the Mind's Eye in 3-D Online Communities

    NASA Astrophysics Data System (ADS)

    Beacham, R. C.; Denard, H.; Baker, D.

    2011-09-01

    Digital technologies have introduced fundamental changes in the forms, content, and media of communication. Indeed, some have suggested we are in the early stages of a seismic shift comparable to that in antiquity with the transition from a primarily oral culture to one based upon writing. The digital transformation is rapidly displacing the long-standing hegemony of text, and restoring in part social, bodily, oral and spatial elements, but in radically reconfigured forms and formats. Contributing to and drawing upon such changes and possibilities, scholars and those responsible for sites preserving or displaying cultural heritage, have undertaken projects to explore the properties and potential of the online communities enabled by "Virtual Worlds" and related platforms for teaching, collaboration, publication, and new modes of disciplinary research. Others, keenly observing and evaluating such work, are poised to contribute to it. It is crucial that leadership be provided to ensure that serious and sustained investigation be undertaken by scholars who have experience, and achievements, in more traditional forms of research, and who perceive the emerging potential of Virtual World work to advance their investigations. The Virtual Museums Transnational Network will seek to engage such scholars and provide leadership in this emerging and immensely attractive new area of cultural heritage exploration and experience. This presentation reviews examples of the current "state of the art" in heritage based Virtual World initiatives, looking at the new modes of social interaction and experience enabled by such online communities, and some of the achievements and future aspirations of this work.

  17. Collaboration and Knowledge Sharing Using 3D Virtual World on "Second Life"

    ERIC Educational Resources Information Center

    Rahim, Noor Faridah A.

    2013-01-01

    A collaborative and knowledge sharing virtual activity on "Second Life" using a learner-centred teaching methodology was initiated between Temasek Polytechnic and The Hong Kong Polytechnic University (HK PolyU) in the October 2011 semester. This paper highlights the author's experience in designing and implementing this e-learning…

  18. A 3D Planetary Neocartographic Tool in Education: A Game on Virtual Moon and Mars Globes

    NASA Astrophysics Data System (ADS)

    Hargitai, H.; Simonné-Dombóvári, E.; Gede, M.

    2012-03-01

    The paper describes the educational use of online virtual globes of Mars and the Moon. The game uses topographic globes of Mars (MOLA) and the Moon (LRO DTM) that include IAU nomenclature as well as informal names. Students have to locate the described points.

  19. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse in capturing a user's directional intention by the method of frame differencing. We divide each Webcam frame into nine grids and make use of background registration to detect the moving object. In order to…

  20. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, the coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique.
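
    As a rough illustration of the extrinsic-parameter optimization step mentioned above (not the authors' calibration procedure), the sketch below refines a projector pose by minimizing the reprojection error of known 3D points under a simple pinhole model; the intrinsics and point data are placeholders.

        # Minimal sketch of extrinsic-parameter refinement for a pinhole projector:
        # minimise the reprojection error between measured projector pixels and
        # projected 3-D points (all data and the intrinsics K are placeholders).
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        K = np.array([[1500.0, 0.0, 640.0],
                      [0.0, 1500.0, 360.0],
                      [0.0, 0.0, 1.0]])          # assumed projector intrinsics

        def project(params, pts3d):
            rvec, t = params[:3], params[3:]
            cam = Rotation.from_rotvec(rvec).apply(pts3d) + t
            uvw = cam @ K.T
            return uvw[:, :2] / uvw[:, 2:3]

        def residuals(params, pts3d, pix):
            return (project(params, pts3d) - pix).ravel()

        pts3d = np.random.rand(20, 3) * 0.5 + [0, 0, 2.0]   # placeholder world points
        true = np.r_[0.05, -0.02, 0.01, 0.1, -0.05, 0.3]    # synthetic "true" pose
        pix = project(true, pts3d)                          # synthetic observations

        fit = least_squares(residuals, x0=np.zeros(6), args=(pts3d, pix))
        print("recovered pose parameters:", np.round(fit.x, 3))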

  1. Towards a Transcription System of Sign Language for 3D Virtual Agents

    NASA Astrophysics Data System (ADS)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in sign language. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, in general, the recognition and reproduction of signs in these systems is an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system to provide signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that the articulation is close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.

  2. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system that works from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of one hundred whole-body mesh models. Each mesh model is homologous, so it has the same topology and the same number of vertices as all other models. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes of our model are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes and the input silhouettes; we then use only the torso-part contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.
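
    The statistical body-shape step described above amounts to a PCA over homologous meshes; the sketch below illustrates that step on synthetic vertex data (the database size, vertex count, and number of retained components are placeholders, and the CMA-ES silhouette-fitting stage is not reproduced).

        # Sketch of the statistical body-shape step: PCA over a database of
        # homologous meshes, then reconstruction of a new shape from a few
        # parameters (all data below are synthetic placeholders).
        import numpy as np

        n_models, n_vertices = 100, 5000
        meshes = np.random.randn(n_models, n_vertices * 3)   # flattened (x, y, z) per model

        mean = meshes.mean(axis=0)
        centered = meshes - mean
        # Principal components of the shape database (rows of Vt).
        U, S, Vt = np.linalg.svd(centered, full_matrices=False)

        k = 10                                   # number of shape parameters kept
        def synthesize(params):
            """Rebuild a mesh from k shape parameters (an Active-Shape-Model step)."""
            return (mean + params @ Vt[:k]).reshape(n_vertices, 3)

        body = synthesize(np.zeros(k))           # zero parameters give the mean body
        print(body.shape)                        # (5000, 3)

    In the reported system, the retained shape parameters would then be adjusted with a derivative-free optimizer such as CMA-ES until the model's projected silhouettes match the input front and side silhouettes.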

  3. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at a distance from the user where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation would be used if it

  4. Immersed Boundary Models for Quantifying Flow-Induced Mechanical Stimuli on Stem Cells Seeded on 3D Scaffolds in Perfusion Bioreactors

    PubMed Central

    Smeets, Bart; Odenthal, Tim; Luyten, Frank P.; Ramon, Herman; Papantoniou, Ioannis; Geris, Liesbet

    2016-01-01

    Perfusion bioreactors regulate flow conditions in order to provide cells with oxygen, nutrients and flow-associated mechanical stimuli. Locally, these flow conditions can vary depending on the scaffold geometry, cellular confluency and amount of extracellular matrix deposition. In this study, a novel application of the immersed boundary method was introduced in order to represent a detailed deformable cell attached to a 3D scaffold inside a perfusion bioreactor and exposed to microscopic flow. The immersed boundary model permits the prediction of mechanical effects of the local flow conditions on the cell. Incorporating stiffness values measured with atomic force microscopy and micro-flow boundary conditions obtained from computational fluid dynamics simulations on the entire scaffold, we compared cell deformation, cortical tension, normal and shear pressure between different cell shapes and locations. We observed a large effect of the precise cell location on the local shear stress and we predicted flow-induced cortical tensions in the order of 5 pN/μm, at the lower end of the range reported in literature. The proposed method provides an interesting tool to study perfusion bioreactor processes down to the level of the individual cell’s micro-environment, which can further aid in the achievement of robust bioprocess control for regenerative medicine applications. PMID:27658116

  5. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  6. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus. PMID:24737097
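
    Redirected walking, used in the first demonstration, works by inserting small gains between real and virtual motion; the sketch below applies a constant rotational gain to a stream of real heading increments (the gain value and frame data are illustrative only, not the WeaVR implementation).

        # Illustrative redirected-walking step: the virtual heading is updated with a
        # small rotational gain relative to the real heading change, which slowly
        # steers the user's physical path while the virtual path stays straight.
        import math

        def redirect_step(virtual_heading, real_delta_heading, rotation_gain=1.1):
            """Return the new virtual heading given a real head-rotation increment."""
            return virtual_heading + rotation_gain * real_delta_heading

        heading = 0.0
        for real_delta in [0.02, 0.015, -0.01, 0.03]:   # radians per frame (made up)
            heading = redirect_step(heading, real_delta)
        print(f"virtual heading after 4 frames: {math.degrees(heading):.2f} deg")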

  7. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.

  8. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, the coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique. PMID:27410124

  9. Comparative brain morphology of Neotropical parrots (Aves, Psittaciformes) inferred from virtual 3D endocasts.

    PubMed

    Carril, Julieta; Tambussi, Claudia Patricia; Degrange, Federico Javier; Benitez Saldivar, María Juliana; Picasso, Mariana Beatriz Julieta

    2016-08-01

    Psittaciformes are a very diverse group of non-passerine birds, with advanced cognitive abilities and highly developed locomotor and feeding behaviours. Using computed tomography and three-dimensional (3D) visualization software, the endocasts of 14 extant Neotropical parrots were reconstructed, with the aim of analysing, comparing and exploring the morphology of the brain within the clade. A 3D geomorphometric analysis was performed, and the encephalization quotient (EQ) was calculated. Brain morphology character states were traced onto a Psittaciformes tree in order to facilitate interpretation of morphological traits in a phylogenetic context. Our results indicate that: (i) there are two conspicuously distinct brain morphologies, one considered walnut type (quadrangular and wider than long) and the other rounded (narrower and rostrally tapered); (ii) Psittaciformes possess a noticeable notch between hemisphaeria that divides the bulbus olfactorius; (iii) the plesiomorphic and most frequently observed characteristics of Neotropical parrots are a rostrally tapered telencephalon in dorsal view, distinctly enlarged dorsal expansion of the eminentia sagittalis and conspicuous fissura mediana; (iv) there is a positive correlation between body mass and brain volume; (v) psittacids are characterized by high EQ values that suggest high brain volumes in relation to their body masses; and (vi) the endocranial morphology of the Psittaciformes as a whole is distinctive relative to other birds. This new knowledge of brain morphology offers much potential for further insight in paleoneurological, phylogenetic and evolutionary studies.
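
    The encephalization quotient mentioned above is the observed brain size divided by the size expected from an allometric body-mass relationship; a generic sketch follows, with the allometric coefficients marked as placeholders because the abstract does not give the regression actually used.

        # Generic encephalization quotient: EQ = observed brain size / expected size,
        # where the expectation comes from an allometric fit E = a * M**b.  The
        # coefficients below are placeholders, not the regression used in the study.
        def encephalization_quotient(brain_volume, body_mass, a=0.12, b=0.67):
            expected = a * body_mass ** b
            return brain_volume / expected

        # Hypothetical parrot: 300 g body mass, 6.0 cm^3 endocast volume
        # (units must match whatever the allometric fit was built with).
        print(round(encephalization_quotient(6.0, 300.0), 2))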

  10. A 3D immersed finite element method with non-homogeneous interface flux jump for applications in particle-in-cell simulations of plasma-lunar surface interactions

    NASA Astrophysics Data System (ADS)

    Han, Daoru; Wang, Pu; He, Xiaoming; Lin, Tao; Wang, Joseph

    2016-09-01

    Motivated by the need to handle complex boundary conditions efficiently and accurately in particle-in-cell (PIC) simulations, this paper presents a three-dimensional (3D) linear immersed finite element (IFE) method with non-homogeneous flux jump conditions for solving electrostatic field involving complex boundary conditions using structured meshes independent of the interface. This method treats an object boundary as part of the simulation domain and solves the electric field at the boundary as an interface problem. In order to resolve charging on a dielectric surface, a new 3D linear IFE basis function is designed for each interface element to capture the electric field jump on the interface. Numerical experiments are provided to demonstrate the optimal convergence rates in L2 and H1 norms of the IFE solution. This new IFE method is integrated into a PIC method for simulations involving charging of a complex dielectric surface in a plasma. A numerical study of plasma-surface interactions at the lunar terminator is presented to demonstrate the applicability of the new method.
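
    The core idea of an immersed finite element basis (local shape functions modified so that the flux condition across the interface is satisfied on interface elements) can be illustrated in one dimension; the sketch below is a 1D analogue with a homogeneous flux jump, not the authors' 3D construction.

        # 1-D analogue of an immersed finite element basis on an interface element
        # [0, h] with the interface at x = a and piecewise coefficient beta.  The
        # basis stays continuous at x = a and satisfies the flux condition
        # beta_minus * phi'(a-) = beta_plus * phi'(a+)  (homogeneous jump case).
        def ife_basis_left(h, a, beta_minus, beta_plus):
            """Left nodal basis: phi(0) = 1, phi(h) = 0, piecewise linear."""
            s_minus = -1.0 / (a + (beta_minus / beta_plus) * (h - a))  # slope on [0, a]
            s_plus = (beta_minus / beta_plus) * s_minus                # slope on [a, h]
            def phi(x):
                return 1.0 + s_minus * x if x <= a else (1.0 + s_minus * a) + s_plus * (x - a)
            return phi

        phi = ife_basis_left(h=1.0, a=0.4, beta_minus=1.0, beta_plus=5.0)
        print(phi(0.0), round(phi(0.4), 3), round(phi(1.0), 3))   # 1.0, kink value, 0.0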

  11. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  12. Making Web3D Less Scary: Toward Easy-to-Use Web3D e-Learning Content Development Tools for Educators

    ERIC Educational Resources Information Center

    de Byl, Penny

    2009-01-01

    Penny de Byl argues that one of the biggest challenges facing educators today is the integration of rich and immersive three-dimensional environments with existing teaching and learning materials. To empower educators with the ability to embrace emerging Web3D technologies, the Advanced Learning and Immersive Virtual Environment (ALIVE) research…

  13. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed. PMID:20557250

  14. A Combined Pharmacophore Modeling, 3D QSAR and Virtual Screening Studies on Imidazopyridines as B-Raf Inhibitors

    PubMed Central

    Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun

    2015-01-01

    B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used for effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model obtained was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained. PMID:26035757

  15. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  16. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  17. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    Recent developments in 3D scanning technologies have not been accompanied by comparable developments in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand, and the information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease their action and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphical inscriptions are among the most affected by these processes because they are, by their very nature, carved into the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors; every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of on-going work dedicated to exploring in new ways the information contained in 3D scanning data sets. A new method for depicting subtle morphological features in the surface of objects or sites has been developed. It makes it possible to contrast human-made patterns still present at the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained within the wider forms of the backdrop. Afterwards, we simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM
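
    The core of the residual idea described above is to separate shallow surface detail from the wider backdrop shape. A minimal sketch of that separation on a height map is shown below, assuming a simple moving-average backdrop; the authors' actual implementation on full 3D scan meshes is not reproduced here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def morphological_residuals(height_map, window=31):
          """Contrast shallow surface detail by removing the large-scale backdrop shape.

          height_map : 2-D array of surface heights sampled from a scan (hypothetical input).
          window     : size of the smoothing window that defines what counts as 'backdrop'.
          """
          backdrop = uniform_filter(height_map.astype(float), size=window)
          return height_map - backdrop   # residuals: engravings, tool marks, inscriptions

      # Synthetic demo: a gently curved rock face with a faint 0.2-unit-deep carving.
      y, x = np.mgrid[0:200, 0:200]
      rock = 0.001 * (x - 100) ** 2                  # large-scale curvature
      rock[90:110, 50:150] -= 0.2                    # shallow engraving
      residual = morphological_residuals(rock)
      print(residual[100, 100], residual[10, 10])    # carving stands out against ~0 background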

  18. Simulating Navigation with Virtual 3D Geovisualizations - A Focus on Memory-Related Factors

    NASA Astrophysics Data System (ADS)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear how much information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features (`elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, that is, specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  19. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using one-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but could be spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
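
    A common way to obtain 3D coordinates from Hi-C interaction frequencies is to convert frequencies to distances and embed them with multidimensional scaling. The sketch below, using a toy contact map and the heuristic d = 1/f, illustrates that general route; it is not necessarily the coordinate-generation method used by the authors.

      import numpy as np

      def hic_to_3d(freq, alpha=1.0, eps=1e-6):
          """Embed Hi-C contact frequencies as 3-D coordinates via classical MDS.

          freq : symmetric matrix of interaction frequencies between genomic bins.
          The frequency-to-distance conversion d = 1 / f**alpha is a common heuristic,
          not necessarily the one used in the article.
          """
          d = 1.0 / (freq + eps) ** alpha
          np.fill_diagonal(d, 0.0)
          n = d.shape[0]
          j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
          b = -0.5 * j @ (d ** 2) @ j                  # double-centered squared distances
          vals, vecs = np.linalg.eigh(b)
          top = np.argsort(vals)[::-1][:3]             # three largest eigenvalues
          return vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))

      freq = np.random.default_rng(1).random((50, 50))
      freq = (freq + freq.T) / 2                       # symmetrize a toy contact map
      coords = hic_to_3d(freq)
      print(coords.shape)                              # (50, 3): one xyz point per bin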

  20. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications

    PubMed Central

    Smith, Jordan W.

    2015-01-01

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings. PMID:26378565

  1. "Active" and "passive" learning of three-dimensional object structure within an immersive virtual reality environment.

    PubMed

    James, K H; Humphrey, G K; Vilis, T; Corrie, B; Baddour, R; Goodale, M A

    2002-08-01

    We used a fully immersive virtual reality environment to study whether actively interacting with objects would affect subsequent recognition, compared with passively observing the same objects. We found that when participants learned object structure by actively rotating the objects, the objects were recognized faster during a subsequent recognition task than when object structure was learned through passive observation. We also found that participants focused their study time during active exploration on a limited number of object views, while ignoring other views. Overall, our results suggest that allowing active exploration of an object during initial learning can facilitate recognition of that object, perhaps owing to the control that the participant has over the object views upon which they can focus. The virtual reality environment is ideal for studying such processes, allowing realistic interaction with objects while maintaining experimenter control. PMID:12395554

  2. Height, social comparison, and paranoia: an immersive virtual reality experimental study.

    PubMed

    Freeman, Daniel; Evans, Nicole; Lister, Rachel; Antley, Angus; Dunn, Graham; Slater, Mel

    2014-08-30

    Mistrust of others may build upon perceptions of the self as vulnerable, consistent with an association of paranoia with perceived lower social rank. Height is a marker of social status and authority. Therefore we tested the effect of manipulating height, as a proxy for social rank, on paranoia. Height was manipulated within an immersive virtual reality simulation. Sixty females who reported paranoia experienced a virtual reality train ride twice: at their normal and reduced height. Paranoia and social comparison were assessed. Reducing a person's height resulted in more negative views of the self in comparison with other people and increased levels of paranoia. The increase in paranoia was fully mediated by changes in social comparison. The study provides the first demonstration that reducing height in a social situation increases the occurrence of paranoia. The findings indicate that negative social comparison is a cause of mistrust.

  3. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications.

    PubMed

    Smith, Jordan W

    2015-09-11

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings.

  4. Enhancing Scientific Collaboration, Transparency, and Public Access: Utilizing the Second Life Platform to Convene a Scientific Conference in 3-D Virtual Space

    NASA Astrophysics Data System (ADS)

    McGee, B. W.

    2006-12-01

    Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible both to begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an embedded physics engine and streaming audio and video capability, and unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and perform demonstrations or premier three-dimensional renderings of objects, processes, or information. An enhanced presentation like those possible with Second Life would be more engaging to non-scientists, and such an event would be accessible to the general users of Second Life, who could have an

  5. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R]

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  6. Cross-Cultural Discussions in a 3D Virtual Environment and Their Affordances for Learners' Motivation and Foreign Language Discussion Skills

    ERIC Educational Resources Information Center

    Jauregi, Kristi; Kuure, Leena; Bastian, Pim; Reinhardt, Dennis; Koivisto, Tuomo

    2015-01-01

    Within the European TILA project a case study was carried out where pupils from schools in Finland and the Netherlands engaged in debating sessions using the 3D virtual world of OpenSim once a week for a period of 5 weeks. The case study had two main objectives: (1) to study the impact that the discussion tasks undertaken in a virtual environment…

  7. Stamping Line Optimization Using Genetic Algorithms and Virtual 3D Line Simulation

    NASA Astrophysics Data System (ADS)

    García-Sedano, Javier A.; Bernardo, Jon Alzola; González, Asier González; de Gauna, Óscar Berasategui Ruiz; de Mendivil, Rafael Yuguero González

    This paper describes the use of a genetic algorithm (GA) to optimize the trajectories followed by industrial robots (IRs) in stamping lines. The objective is to generate valid, collision-free paths that minimize the cycle time required to complete all the operations in an individual stamping cell of the line. A commercial software tool is used to simulate the candidate trajectories and potential collisions, taking into account the specific geometries of the different parts involved: robot arms, columns, dies and manipulators. A genetic algorithm is then proposed to optimize the trajectories. The two systems, the GA and the simulator, communicate in a client-server fashion so that the simulator can evaluate the solutions proposed by the GA. The novelty of the idea is to consider the geometry of the specific components when adjusting robot paths to optimize the cycle time of a given stamping cell.
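
    A minimal sketch of the GA-plus-simulator loop described above follows. The simulate() function is a stand-in for the commercial 3D line simulator queried over the client-server link, and the penalty-based fitness, one-point crossover and Gaussian mutation are generic GA choices, not the exact operators used in the paper.

      import random

      random.seed(0)

      def simulate(waypoints):
          """Stand-in for the commercial line simulator: returns (cycle_time, collided).

          In the paper the GA queries the simulator over a client-server link; here a toy
          cost function replaces it so the loop is runnable.
          """
          cycle_time = sum(abs(a - b) for a, b in zip(waypoints, waypoints[1:])) + 1.0
          collided = any(w < 0.2 for w in waypoints)   # pretend low waypoints hit the die
          return cycle_time, collided

      def fitness(waypoints):
          t, collided = simulate(waypoints)
          return t + (1000.0 if collided else 0.0)     # heavy penalty keeps paths collision-free

      def evolve(pop_size=40, genes=6, generations=60):
          pop = [[random.uniform(0.0, 1.0) for _ in range(genes)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness)
              parents = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, genes)                 # one-point crossover
                  child = a[:cut] + b[cut:]
                  i = random.randrange(genes)                      # Gaussian mutation
                  child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.05)))
                  children.append(child)
              pop = parents + children
          return min(pop, key=fitness)

      best = evolve()
      print(fitness(best))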

  8. An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation.

    PubMed

    Seraglia, Bruno; Gamberini, Luciano; Priftis, Konstantinos; Scatturin, Pietro; Martinelli, Massimiliano; Cutini, Simone

    2011-01-01

    For over two decades Virtual Reality (VR) has been a useful tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that underlie VR experiences. Although functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper fNIRS probe application, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants could bisect lines in a virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed.

  9. Individual reactions to a multisensory immersive virtual environment: the impact of a wind farm on individuals.

    PubMed

    Ruotolo, Francesco; Senese, Vincenzo Paolo; Ruggiero, Gennaro; Maffei, Luigi; Masullo, Massimiliano; Iachini, Tina

    2012-08-01

    The aim of this study was to assess the impact of a wind farm on individuals by means of an audio-visual methodology that tried to simulate biologically plausible individual-environment interactions. To disentangle the effects of auditory and visual components on cognitive performances and subjective evaluations, unimodal (Audio or Video) and bimodal (Audio + Video) approaches were compared. Participants were assigned to three experimental conditions that reproduced a wind farm by means of an immersive virtual reality system: bimodal condition, reproducing scenarios with both acoustic and visual stimuli; unimodal visual condition, with only visual stimuli; unimodal auditory condition, with only auditory stimuli. While immersed in the virtual scenarios, participants performed tasks assessing verbal fluency, short-term verbal memory, backward counting, and distance estimations (egocentric: how far is the turbine from you?; allocentric: how far is the turbine from the target?). Afterwards, participants reported their degree of visual and noise annoyance. The results revealed that the presence of a visual scenario as compared to the only availability of auditory stimuli may exert a negative effect on resource-demanding cognitive tasks but a positive effect on perceived noise annoyance. This supports the idea that humans perceive the environment holistically and that auditory and visual features are processed in close interaction. PMID:22806673

  11. Combining Immersive Virtual Worlds and Virtual Learning Environments into an Integrated System for Hosting and Supporting Virtual Conferences

    NASA Astrophysics Data System (ADS)

    Polychronis, Nikolaos; Patrikakis, Charalampos; Voulodimos, Athanasios

    In this paper, a proposal for hosting and supporting virtual conferences based on the use of state-of-the-art web technologies and computer-mediated education software is presented. The proposed system consists of a virtual conference venue hosted in the Second Life platform, targeted at hosting synchronous conference sessions, and of a web space created with the e-learning platform Moodle, targeted at serving the needs of asynchronous communication as well as user and content management. The use of Sloodle (the next generation of Moodle software incorporating virtual world support), which up to now has been used only in traditional education, enables the combination of the virtual conference venue and the conference supporting site into an integrated system that allows successful and cost-effective virtual conferences to be conducted.

  12. A Hand-Free Solution for the Interaction in an Immersive Virtual Environment: the Case of the Agora of Segesta

    NASA Astrophysics Data System (ADS)

    Olivito, R.; Taccola, E.; Albertini, N.

    2015-02-01

    The paper illustrates the project of an interdisciplinary team composed of archaeologists and researchers of the Scuola Normale Superiore and the University of Pisa. The synergy between these centres has recently allowed for a more articulated 3D simulation of the agora of Segesta. Here, the archaeological excavations have brought to light the remains of a huge public building (stoa) of the Late-Hellenistic period. Computer graphics and image-based modeling have been used to monitor, document and record the different phases of the excavation activity (layers, findings, wall structures) and to create a 3D model of the whole site. In order to increase the level of interaction as much as possible, all the models can be managed by an application specially designed for an immersive virtual environment (CAVE-like system). By using a hand-tracking sensor (Leap) in a non-standard way, the application allows for completely hand-free interaction with the simulation of the agora of Segesta and the different phases of the fieldwork activities. More specifically, the operator can use simple hand gestures to activate a natural interface, scroll and visualize the perfectly overlapped models of the archaeological layers, pop up the models of single meaningful objects discovered during the excavation, and obtain all the relative metadata (stored on a dedicated server), which can be visualized on external devices (e.g. tablets or monitors) without further wearable devices. All these functions are contextualized within the whole simulation of the agora, so that it is possible to verify old interpretations and advance new ones in real time, simulating within the CAVE the whole archaeological investigation, going over the different phases of the excavation more rapidly, retrieving information which could have been missed during the fieldwork, and verifying, even ex post, issues not correctly documented during the fieldwork. The opportunity to physically interact with the 3D model

  13. Web-Based Immersive Virtual Patient Simulators: Positive Effect on Clinical Reasoning in Medical Education

    PubMed Central

    Heiermann, Nadine; Plum, Patrick Sven; Wahba, Roger; Chang, De-Hua; Maus, Martin; Chon, Seung-Hun; Hoelscher, Arnulf H; Stippel, Dirk Ludger

    2015-01-01

    Background Clinical reasoning is based on the declarative and procedural knowledge of workflows in clinical medicine. Educational approaches such as problem-based learning or mannequin simulators support learning of procedural knowledge. Immersive patient simulators (IPSs) go one step further as they allow an illusionary immersion into a synthetic world. Students can freely navigate an avatar through a three-dimensional environment, interact with the virtual surroundings, and treat virtual patients. Through playful learning with an IPS, medical workflows can be repetitively trained and internalized. As there are only a few university-driven IPSs with a substantial amount of medical knowledge available, we developed a university-based IPS framework. Our simulator is free to use and combines a high degree of immersion with in-depth medical content. By adding disease-specific content modules, the simulator framework can be expanded depending on curricular demands. However, these new educational tools compete with traditional teaching. Objective It was our aim to develop an educational content module that teaches clinical and therapeutic workflows in surgical oncology. Furthermore, we wanted to examine how the use of this module affects student performance. Methods The new module was based on the declarative and procedural learning targets of the official German medical examination regulations. The module was added to our custom-made IPS named ALICE (Artificial Learning Interface for Clinical Education). ALICE was evaluated with 62 third-year students. Results Students showed a high degree of motivation when using the simulator, as most of them had fun using it. ALICE showed a positive impact on clinical reasoning, as there was a significant improvement in determining the correct therapy after using the simulator. ALICE also had a positive impact on declarative knowledge, as there was an improvement in answering multiple-choice questions administered before and after simulator use. Conclusions

  14. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding the mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974
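
    Offline classification of mVEP responses is typically done by extracting windowed amplitude features from stimulus-locked epochs and training a linear classifier. The sketch below shows that generic pipeline on synthetic data with scikit-learn's LDA; the feature set and classifier are illustrative assumptions, not the authors' exact method.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      # Hypothetical data: EEG epochs time-locked to the onset of moving stimuli; the task
      # is to detect which epochs contain an attended-motion (mVEP-like) response.
      rng = np.random.default_rng(0)
      n_epochs, n_channels, n_samples = 200, 8, 64
      X = rng.normal(size=(n_epochs, n_channels, n_samples))
      y = rng.integers(0, 2, n_epochs)                 # 1 = attended stimulus, 0 = not
      X[y == 1, :, 20:40] += 0.5                       # inject a crude mVEP-like deflection

      # Simple feature vector: each epoch flattened after averaging short time windows.
      features = X.reshape(n_epochs, n_channels, 8, 8).mean(axis=3).reshape(n_epochs, -1)

      clf = LinearDiscriminantAnalysis()
      print(cross_val_score(clf, features, y, cv=5).mean())   # offline accuracy estimate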

  16. Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality

    PubMed Central

    Normand, Jean-Marie; Giannopoulos, Elias; Spanlang, Bernhard; Slater, Mel

    2011-01-01

    Background Body change illusions have been of great interest in recent years for understanding how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership of an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. Methodology Twenty-two participants entered a virtual reality (VR) delivered through a stereo, head-tracked, wide field-of-view head-mounted display. They saw from a first person perspective a virtual body, substituting their own, that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart that they saw in the VR. There was a synchronous condition, where their prodding movements were synchronous with what they felt and saw, and an asynchronous condition, where this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also by a comparison of before and after self-estimates of belly size produced by direct visual manipulation of the virtual body seen from the first person perspective. Conclusions The results show that a first person perspective of a virtual body that substitutes for the own body in virtual reality, together with synchronous multisensory stimulation, can temporarily produce changes in body representation towards the larger belly size. This was demonstrated by (a) questionnaire results, (b) the difference between the self-estimated belly size, judged from a first person perspective, after and before the experimental

  17. Level of Immersion in Virtual Environments Impacts the Ability to Assess and Teach Social Skills in Autism Spectrum Disorder

    PubMed Central

    Bugnariu, Nicoleta L.

    2016-01-01

    Virtual environments (VEs) may be useful for delivering social skills interventions to individuals with autism spectrum disorder (ASD). Immersive VEs provide opportunities for individuals with ASD to learn and practice skills in a controlled replicable setting. However, not all VEs are delivered using the same technology, and the level of immersion differs across settings. We group studies into low-, moderate-, and high-immersion categories by examining five aspects of immersion. In doing so, we draw conclusions regarding the influence of this technical manipulation on the efficacy of VEs as a tool for assessing and teaching social skills. We also highlight ways in which future studies can advance our understanding of how manipulating aspects of immersion may impact intervention success. PMID:26919157

  18. Level of Immersion in Virtual Environments Impacts the Ability to Assess and Teach Social Skills in Autism Spectrum Disorder.

    PubMed

    Miller, Haylie L; Bugnariu, Nicoleta L

    2016-04-01

    Virtual environments (VEs) may be useful for delivering social skills interventions to individuals with autism spectrum disorder (ASD). Immersive VEs provide opportunities for individuals with ASD to learn and practice skills in a controlled replicable setting. However, not all VEs are delivered using the same technology, and the level of immersion differs across settings. We group studies into low-, moderate-, and high-immersion categories by examining five aspects of immersion. In doing so, we draw conclusions regarding the influence of this technical manipulation on the efficacy of VEs as a tool for assessing and teaching social skills. We also highlight ways in which future studies can advance our understanding of how manipulating aspects of immersion may impact intervention success.

  19. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. PMID:23827333

  1. Pharmacophore modeling, virtual screening and 3D-QSAR studies of 5-tetrahydroquinolinylidine aminoguanidine derivatives as sodium hydrogen exchanger inhibitors.

    PubMed

    Bhatt, Hardik G; Patel, Paresh K

    2012-06-01

    The sodium hydrogen exchanger (SHE) is one of the most important targets in the treatment of myocardial ischemia. In the course of our research into new types of non-acylguanidine inhibitors, the SHE inhibitory activities of 5-tetrahydroquinolinylidine aminoguanidine derivatives were used to build pharmacophore and 3D-QSAR models. The Genetic Algorithm Similarity Program (GASP) was used to derive a 3D pharmacophore model, which was used for effective alignment of the data set. Eight molecules were selected on the basis of structural diversity to build 10 different pharmacophore models. Model 1 was considered the best model as it had the highest fitness score of the ten models. The obtained model contained two acceptor sites, two donor atoms and one hydrophobic region. Pharmacophore modeling was followed by substructure searching and virtual screening. The best CoMFA model, representing steric and electrostatic fields, obtained for 30 training set molecules was statistically significant, with a cross-validated coefficient (q(2)) of 0.673 and a conventional coefficient (r(2)) of 0.988. In addition to the steric and electrostatic fields observed in CoMFA, CoMSIA also represents hydrophobic, hydrogen bond donor and hydrogen bond acceptor fields. The CoMSIA model was also significant, with a cross-validated coefficient (q(2)) and conventional coefficient (r(2)) of 0.636 and 0.986, respectively. Both models were validated with an external test set of eight compounds and gave satisfactory predictions (r(pred)(2)) of 0.772 and 0.701 for the CoMFA and CoMSIA models, respectively. This pharmacophore-based 3D-QSAR approach provides significant insights that can be used to design novel, potent and selective SHE inhibitors. PMID:22546667

  2. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced using computers, have been published for hobby and entertainment purposes. It is said that activation of the brain, improvement of eyesight, reduction of mental stress, healing effects, etc. can be expected when a CGS is properly appreciated as a stereoscopic view. There is a great deal of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each type has advantages and disadvantages when the stereogram is viewed directly with two eyes, which can be learned by training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.
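
    Of the stereogram types listed, the single image stereogram (SIS) is the most algorithmic: each pixel is constrained to repeat an earlier pixel at an offset controlled by the depth map image (DMI). A simplified textbook construction is sketched below on a hypothetical depth map; it ignores the wallpaper-shift and mixed-type variants discussed in the paper.

      import numpy as np

      def autostereogram(depth, pattern_width=60, max_shift=12, seed=0):
          """Build a single-image random-dot stereogram (SIS) from a depth map.

          depth : 2-D array in [0, 1]; larger values appear closer to the viewer.
          This is a simplified textbook construction, not tied to any particular
          stereogram variant discussed in the paper.
          """
          rng = np.random.default_rng(seed)
          h, w = depth.shape
          img = rng.integers(0, 256, size=(h, w), dtype=np.uint8)
          for row in range(h):
              for col in range(pattern_width, w):
                  shift = int(depth[row, col] * max_shift)
                  img[row, col] = img[row, col - pattern_width + shift]
          return img

      # Hypothetical depth map image: a raised square floating above the background.
      depth = np.zeros((200, 300))
      depth[70:130, 120:180] = 1.0
      print(autostereogram(depth).shape)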

  3. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    The Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing the large-size art works. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT709. A prerequisite for this is a 3D model which can be created within this computer-aided virtual environment. The on-site measured surface luminance, chromaticity and spectral data were used as input to established real-time indirect illumination and physically based algorithms to produce the best approximation of the RGB values used to generate images of the objects. Conversion of RGB to and from spectra has been a major undertaking in order to match the infinite number of spectra that create the same colors defined by RGB in the program. The ability to simulate light intensity, candle power and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under a precisely controlled representation of the light spectrum that can be experienced in 3D format in a virtual environment, as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.
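
    The conversion from a measured spectral power distribution to device RGB referred to above runs through the CIE XYZ tristimulus integrals followed by an XYZ-to-sRGB matrix. The sketch below shows that route; the Gaussian colour-matching curves are coarse stand-ins for the tabulated CIE 1931 observer data and the normalization is a demonstration choice, so this is not the calibrated pipeline used in the study.

      import numpy as np

      def cie_xyz_bar(lam):
          """Rough Gaussian approximations of the CIE 1931 colour-matching functions.

          Coarse approximations, good enough for a demonstration but not for colorimetry;
          exact work would use the tabulated 2-degree observer data.
          """
          x = 1.06 * np.exp(-0.5 * ((lam - 599.8) / 37.9) ** 2) + \
              0.36 * np.exp(-0.5 * ((lam - 442.0) / 16.0) ** 2)
          y = 1.01 * np.exp(-0.5 * ((lam - 556.1) / 46.9) ** 2)
          z = 1.79 * np.exp(-0.5 * ((lam - 449.8) / 19.4) ** 2)
          return x, y, z

      def spectrum_to_srgb(wavelengths, power):
          """Integrate a spectral power distribution into (normalized) linear sRGB values."""
          xb, yb, zb = cie_xyz_bar(wavelengths)
          X = np.trapz(power * xb, wavelengths)
          Y = np.trapz(power * yb, wavelengths)
          Z = np.trapz(power * zb, wavelengths)
          m = np.array([[ 3.2406, -1.5372, -0.4986],     # XYZ -> linear sRGB (D65)
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])
          rgb = m @ np.array([X, Y, Z])
          return np.clip(rgb / max(rgb.max(), 1e-9), 0.0, 1.0)

      lam = np.linspace(380, 780, 81)
      flat = np.ones_like(lam)                           # a flat (white-ish) test spectrum
      print(spectrum_to_srgb(lam, flat))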

  5. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections the software generates, and allows for visualization of, images of virtual sections generated in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. PMID:25808044
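
    The "virtual microtome" behaviour described above amounts to resampling the reconstructed volume along an arbitrarily oriented plane. A minimal sketch of that resampling is given below, assuming the registered serial sections have already been stacked into a 3D array; the segmentation and rendering steps of the actual model are not reproduced.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def virtual_section(volume, origin, u, v, size=(200, 200), step=1.0):
          """Resample a 3-D volume along an arbitrary plane ('virtual microtome').

          volume : 3-D array built by stacking registered serial sections (hypothetical input).
          origin : xyz point on the desired cutting plane.
          u, v   : two orthogonal unit vectors spanning the plane.
          """
          u, v = np.asarray(u, float), np.asarray(v, float)
          rows, cols = np.mgrid[0:size[0], 0:size[1]].astype(float)
          pts = (np.asarray(origin, float)[:, None, None]
                 + u[:, None, None] * (rows - size[0] / 2) * step
                 + v[:, None, None] * (cols - size[1] / 2) * step)
          return map_coordinates(volume, pts, order=1, mode='nearest')

      # Toy volume standing in for a stack of segmented histology sections.
      vol = np.zeros((64, 64, 64))
      vol[20:44, 20:44, 20:44] = 1.0
      oblique = virtual_section(vol, origin=(32, 32, 32),
                                u=(1, 0, 0), v=(0, 0.7071, 0.7071), size=(64, 64))
      print(oblique.shape, oblique.max())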

  6. iVFTs - immersive virtual field trips for interactive learning about Earth's environment.

    NASA Astrophysics Data System (ADS)

    Bruce, G.; Anbar, A. D.; Semken, S. C.; Summons, R. E.; Oliver, C.; Buxner, S.

    2014-12-01

    Innovations in immersive interactive technologies are changing the way students explore Earth and its environment. State-of-the-art hardware has given developers the tools needed to capture high-resolution spherical content, 360° panoramic video, giga-pixel imagery, and unique viewpoints via unmanned aerial vehicles as they explore remote and physically challenging regions of our planet. Advanced software enables integration of these data into seamless, dynamic, immersive, interactive, content-rich, and learner-driven virtual field explorations, experienced online via HTML5. These surpass conventional online exercises that use 2-D static imagery, and they let students engage in virtual environments that are more like games than lectures. Grounded in the active learning of exploration, inquiry, and application of knowledge as it is acquired, users interact non-linearly in conjunction with an intelligent tutoring system (ITS). The integration of this system allows the educational experience to be adapted to each individual student as they interact within the program. Such explorations, which we term "immersive virtual field trips" (iVFTs), are being integrated into cyber-learning, allowing science teachers to take students to scientifically significant but inaccessible environments. Our team and collaborators are producing a diverse suite of freely accessible iVFTs to teach key concepts in geology, astrobiology, ecology, and anthropology. Topics include Early Life, Biodiversity, Impact craters, Photosynthesis, Geologic Time, Stratigraphy, Tectonics, Volcanism, Surface Processes, The Rise of Oxygen, Origin of Water, Early Civilizations, Early Multicellular Organisms, and Bioarcheology. These diverse topics allow students to experience field sites all over the world, including Grand Canyon (USA), Flinders Ranges (Australia), Shark Bay (Australia), Rainforests (Panama), Teotihuacan (Mexico), Upheaval Dome (USA), Pilbara (Australia), Mid-Atlantic Ridge

  8. Behavioral compliance for dynamic versus static signs in an immersive virtual environment.

    PubMed

    Duarte, Emília; Rebelo, Francisco; Teles, Júlia; Wogalter, Michael S

    2014-09-01

    This study used an immersive virtual environment (IVE) to examine how dynamic features in signage affect behavioral compliance during a work-related task and an emergency egress. Ninety participants performed a work-related task followed by an emergency egress. Compliance with uncued and cued safety signs was assessed prior to an explosion/fire involving egress with exit signs. Although dynamic presentation produced the highest compliance, the difference between dynamic and static presentation was only statistically significant for uncued signs. Uncued signs, both static and dynamic, were effective in changing behavior compared to no/minimal signs. Findings are explained based on sign salience and on task differences. If signs must capture attention while individuals are attending to other tasks, salient (e.g., dynamic) signs are useful in benefiting compliance. This study demonstrates the potential for IVEs to serve as a useful tool in behavioral compliance research. PMID:24210840

  9. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  10. Subliminal Reorientation and Repositioning in Immersive Virtual Environments using Saccadic Suppression.

    PubMed

    Bolte, Benjamin; Lappe, Markus

    2015-04-01

    Virtual reality strives to provide a user with an experience of a simulated world that feels as natural as the real world. Yet, to induce this feeling, sometimes it becomes necessary for technical reasons to deviate from a one-to-one correspondence between the real and the virtual world, and to reorient or reposition the user's viewpoint. Ideally, users should not notice the change of the viewpoint to avoid breaks in perceptual continuity. Saccades, the fast eye movements that we make in order to switch gaze from one object to another, produce a visual discontinuity on the retina, but this is not perceived because the visual system suppresses perception during saccades. As a consequence, our perception fails to detect rotations of the visual scene during saccades. We investigated whether saccadic suppression of image displacement (SSID) can be used in an immersive virtual environment (VE) to unconsciously rotate and translate the observer's viewpoint. To do this, the scene changes have to be precisely time-locked to the saccade onset. We used electrooculography (EOG) for eye movement tracking and assessed the performance of two modified eye movement classification algorithms for the challenging task of online saccade detection that is fast enough for SSID. We investigated the sensitivity of participants to translations (forward/backward) and rotations (in the transverse plane) during trans-saccadic scene changes. We found that participants were unable to detect approximately ±0.5m translations along the line of gaze and ±5° rotations in the transverse plane during saccades with an amplitude of 15°. If the user stands still, our approach exploiting SSID thus provides the means to unconsciously change the user's virtual position and/or orientation. For future research and applications, exploiting SSID has the potential to improve existing redirected walking and change blindness techniques for unlimited navigation through arbitrarily-sized VEs by real walking.
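
    Online SSID requires detecting saccade onset from the EOG trace fast enough to displace the scene before the eyes land. The sketch below shows a basic velocity-threshold detector of the kind the authors' modified classification algorithms improve upon; the threshold and refractory period are illustrative values, not the parameters evaluated in the paper.

      import numpy as np

      def detect_saccade_onsets(eog, fs, vel_thresh=100.0, min_gap_ms=50.0):
          """Simple velocity-threshold saccade detection on a calibrated EOG trace.

          eog        : horizontal eye position in degrees, one sample per 1/fs seconds.
          vel_thresh : velocity threshold in deg/s; ~100 deg/s is a common heuristic,
                       not the specific classifier evaluated by the authors.
          Returns sample indices at which a saccade is first detected; in an online
          system the scene rotation/translation would be triggered at these moments.
          """
          vel = np.abs(np.diff(eog)) * fs               # finite-difference velocity
          above = vel > vel_thresh
          onsets, last = [], -np.inf
          for i, flag in enumerate(above):
              if flag and (i - last) > min_gap_ms / 1000.0 * fs:
                  onsets.append(i)
                  last = i
          return onsets

      # Synthetic trace: fixation, a 15-degree saccade at 0.5 s, fixation again.
      fs = 1000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      eog = np.where(t < 0.5, 0.0, 15.0) + np.random.default_rng(0).normal(0, 0.01, t.size)
      print(detect_saccade_onsets(eog, fs))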

  11. Immersive Virtual Reality Technologies as a New Platform for Science, Scholarship, and Education

    NASA Astrophysics Data System (ADS)

    Djorgovski, Stanislav G.; Hut, P.; McMillan, S.; Knop, R.; Vesperini, E.; Graham, M.; Portegies Zwart, S.; Farr, W.; Mahabal, A.; Donalek, C.; Longo, G.

    2010-01-01

    Immersive virtual reality (VR) and virtual worlds (VWs) are an emerging set of technologies which likely represent the next evolutionary step in the ways we use information technology to interact with the world of information and with other people, the roles now generally fulfilled by the Web and other common Internet applications. Currently, these technologies are mainly accessed through various VWs, e.g., the Second Life (SL), which are general platforms for a broad range of user activities. As an experiment in the utilization of these technologies for science, scholarship, education, and public outreach, we have formed the Meta-Institute for Computational Astrophysics (MICA; http://mica-vw.org), the first professional scientific organization based exclusively in VWs. The goals of MICA are: (1) Exploration, development and promotion of VWs and VR technologies for professional research in astronomy and related fields. (2) Providing and developing novel social networking venues and mechanisms for scientific collaboration and communications, including professional meetings, effective telepresence, etc. (3) Use of VWs and VR technologies for education and public outreach. (4) Exchange of ideas and joint efforts with other scientific disciplines in promoting these goals for science and scholarship in general. To this effect, we have a regular schedule of professional and public outreach events in SL, including technical seminars, workshops, journal club, collaboration meetings, public lectures, etc. We find that these technologies are already remarkably effective as a telepresence platform for scientific and scholarly discussions, meetings, etc. They can offer substantial savings of time and resources, and eliminate a lot of unnecessary travel. They are equally effective as a public outreach platform, reaching a world-wide audience. On the pure research front, we are currently exploring the use of these technologies as a venue for numerical simulations and their

  12. 3D virtual planning in orthognathic surgery and CAD/CAM surgical splints generation in one patient with craniofacial microsomia: a case report

    PubMed Central

    Vale, Francisco; Scherzberg, Jessica; Cavaleiro, João; Sanz, David; Caramelo, Francisco; Maló, Luísa; Marcelino, João Pedro

    2016-01-01

    Objective: In this case report, the feasibility and precision of three-dimensional (3D) virtual planning in one patient with craniofacial microsomia are tested using Nemoceph 3D-OS software (Software Nemotec SL, Madrid, Spain) to predict postoperative outcomes on hard tissue and to produce CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) surgical splints. Methods: The clinical protocol consists of 3D data acquisition of the craniofacial complex by cone-beam computed tomography (CBCT) and surface scanning of the plaster dental casts. The ''virtual patient'' created underwent virtual surgery and a simulation of postoperative results on hard tissues. Surgical splints were manufactured using CAD/CAM technology in order to transfer the virtual surgical plan to the operating room. Intraoperatively, the CAD/CAM and conventional surgical splints were comparable. A second set of 3D images was obtained after surgery to acquire linear measurements and compare them with the measurements obtained when predicting postoperative results virtually. Results: A high similarity was found between the two types of surgical splints, with equal fitting on the dental arches. The linear measurements showed some discrepancies between the actual surgical outcomes and the results predicted by the 3D virtual simulation, but caution must be taken in the analysis of these results due to several variables. Conclusions: The reported case confirms the clinical feasibility of the described computer-assisted orthognathic surgical protocol. Further progress in the development of technologies for 3D image acquisition and improvements in software programs to simulate postoperative changes in soft tissue are required. PMID:27007767

  13. Collaborative Science Learning in Three-Dimensional Immersive Virtual Worlds: Pre-Service Teachers' Experiences in Second Life

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin; McCandless, Kevin

    2014-01-01

    The purpose of this mixed methods study was to help pre-service teachers experience and evaluate the potential of Second Life, a three-dimensional immersive virtual environment, for potential integration into their future teaching. By completing collaborative assignments in Second Life, nineteen pre-service general education teachers explored an…

  14. Designing the Self: The Transformation of the Relational Self-Concept through Social Encounters in a Virtual Immersive Environment

    ERIC Educational Resources Information Center

    Knutzen, K. Brant; Kennedy, David M.

    2012-01-01

    This article describes the findings of a 3-month study on how social encounters mediated by an online Virtual Immersive Environment (VIE) impacted on the relational self-concept of adolescents. The study gathered data from two groups of students as they took an Introduction to Design and Programming class. Students in group 1 undertook course…

  15. 3D Virtual Reality Applied in Tectonic Geomorphic Study of the Gombori Range of Greater Caucasus Mountains

    NASA Astrophysics Data System (ADS)

    Sukhishvili, Lasha; Javakhishvili, Zurab

    2016-04-01

    Gombori Range represents the southern part of the young Greater Caucasus Mountains and stretches from NW to SE. The range separates the Alazani and Iori basins within the eastern Georgian province of Kakheti. The active phase of Caucasian orogeny started in the Pliocene, but according to the alluvial sediments of the Gombori Range (mapped on the Soviet geologic map), we infer its uplift to be a Quaternary event. The highest peak of the Gombori Range has an absolute elevation of 1991 m, while the neighboring Alazani valley reaches only 400 m. We assume the range has a very fast uplift rate, which could have triggered reversals of stream flow direction during the Quaternary. To test these preliminary assumptions, we will use tectonic, fluvial geomorphic, and stratigraphic approaches, including paleocurrent analyses and various affordable absolute dating techniques, to detect and date evidence of river course reversals. For these purposes we have selected the river Turdo outcrop. The river flows northwards from the Gombori Range near the region's main city of Telavi and generates a 30-40 m high continuous outcrop along a 1 km section. The Turdo outcrop has very steep walls and requires special climbing skills to work on it. The goal of this particular study is to avoid the time- and resource-consuming ground survey of this steep, high and wide outcrop and to test 3D aerial and ground-based photogrammetric modelling and analysis approaches in the initial stage of the tectonic geomorphic study. Using this type of remote sensing and virtual lab analysis of the 3D outcrop model, we roughly delineated stratigraphic layers, selected exact locations for applying various research techniques, and planned safe and suitable climbing routes for reaching the investigation sites.
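
    As an illustration of the kind of virtual-lab measurement such a 3D outcrop model allows, the sketch below estimates the dip and dip direction of a bedding plane from three points picked on it in the model. The coordinates are invented for illustration and are not measurements from the Turdo section.

        # Sketch: estimate the dip of a stratigraphic layer from three points picked on a
        # bedding surface in a 3D outcrop model. Coordinates (metres, local east-north-up
        # frame) are hypothetical.
        import numpy as np

        p1 = np.array([0.0,   0.0, 612.0])
        p2 = np.array([35.0,  4.0, 605.5])
        p3 = np.array([8.0,  22.0, 609.0])

        normal = np.cross(p2 - p1, p3 - p1)       # normal to the bedding plane
        normal /= np.linalg.norm(normal)
        if normal[2] < 0:                         # make the normal point upward
            normal = -normal

        dip = np.degrees(np.arccos(normal[2]))    # angle between the plane and horizontal
        dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0  # azimuth

        print(f"dip = {dip:.1f} deg toward azimuth {dip_direction:.0f} deg")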

  16. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from MRE data consisting of three-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases in the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416
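
    To convey the core idea behind virtual-fields identification in a deliberately simplified setting (this is a toy 1D illustration, not the OVFM itself), the sketch below recovers a shear modulus from a noise-free harmonic displacement field: multiplying the wave equation by a virtual field that vanishes on the boundary and integrating by parts reduces the identification to a ratio of two integrals.

        # Sketch: virtual-fields identification of a shear modulus from a 1D harmonic
        # displacement field (noise-free toy problem; the actual OVFM works on 3D MRE data
        # and optimizes the virtual fields for noise robustness).
        import numpy as np

        rho   = 1000.0          # density (kg/m^3)
        mu    = 3000.0          # "true" shear modulus (Pa), to be recovered
        freq  = 100.0           # excitation frequency (Hz)
        omega = 2 * np.pi * freq
        k     = omega * np.sqrt(rho / mu)        # shear wave number

        L = 0.1                                   # domain length (m)
        x = np.linspace(0.0, L, 2001)
        u = 1e-5 * np.sin(k * x)                  # synthetic steady-state displacement

        # Virtual field: any kinematically admissible field vanishing at the boundaries.
        v = np.sin(np.pi * x / L)

        du = np.gradient(u, x)
        dv = np.gradient(v, x)

        # From  mu * u'' + rho * omega^2 * u = 0, the weak form with v(0) = v(L) = 0 gives
        #   mu = rho * omega^2 * integral(u * v) / integral(u' * v')
        mu_identified = rho * omega**2 * np.trapz(u * v, x) / np.trapz(du * dv, x)
        print(f"identified mu = {mu_identified:.1f} Pa (true value {mu:.1f} Pa)")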

  17. The stress and workload of virtual reality training: the effects of presence, immersion and flow.

    PubMed

    Lackey, S J; Salcedo, J N; Szalma, J L; Hancock, P A

    2016-08-01

    The present investigation evaluated the effects of virtual reality (VR) training on the performance, perceived workload and stress response to a live training exercise in a sample of Soldiers. We also examined the relationship between perceptions of that same VR training, as measured by engagement, immersion, presence, flow, perceived utility and ease of use, and the performance, workload and stress reported on the live training task. To a degree, these latter relationships were moderated by task performance, as measured by binary (Go/No-Go) ratings. Participants who reported positive VR experiences also tended to experience lower stress and lower workload when performing the live version of the task. Thus, VR training regimens may be efficacious for mitigating the stress and workload associated with criterion tasks, thereby reducing the ultimate likelihood of real-world performance failure. Practitioner Summary: VR provides opportunities for training in artificial worlds composed of highly realistic features. Our virtual room clearing scenario facilitated the integration of Training and Readiness objectives and satisfied training doctrine obligations in a compelling, engaging experience for both novice and experienced trainees. PMID:26977540

  18. Redirecting walking and driving for natural navigation in immersive virtual environments.

    PubMed

    Bruder, Gerd; Interrante, Victoria; Phillips, Lane; Steinicke, Frank

    2012-04-01

    Walking is the most natural form of locomotion for humans, and real walking interfaces have demonstrated their benefits for several navigation tasks. With recently proposed redirection techniques it becomes possible to overcome space limitations as imposed by tracking sensors or laboratory setups, and, theoretically, it is now possible to walk through arbitrarily large virtual environments. However, walking as the sole locomotion technique has drawbacks, in particular for long distances; even in the real world we tend to supplement walking with passive or active transportation for longer-distance travel. In this article we show that concepts from the field of redirected walking can be applied to movements with transportation devices. We conducted psychophysical experiments to determine perceptual detection thresholds for redirected driving, and set these in relation to results from redirected walking. We show that redirected walking-and-driving approaches can easily be realized in immersive virtual reality laboratories, e.g., with electric wheelchairs, and show that such systems can combine advantages of real walking in confined spaces with benefits of using vehicle-based self-motion for longer-distance travel.
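
    The mechanism shared by redirected walking and redirected driving is a small gain injected between real and virtual self-motion, kept within the user's perceptual detection thresholds. The sketch below shows such a mapping for yaw rotation; the threshold values are placeholders and are not the thresholds measured in this study.

        # Sketch: apply a rotation gain to map real yaw rotation onto virtual yaw rotation,
        # clamped to stay inside an (assumed, placeholder) perceptual detection range.
        def redirect_yaw(real_delta_yaw_deg: float,
                         desired_gain: float,
                         gain_min: float = 0.85,        # placeholder lower threshold
                         gain_max: float = 1.25) -> float:  # placeholder upper threshold
            """Return the virtual yaw increment for a given real yaw increment."""
            gain = min(max(desired_gain, gain_min), gain_max)  # keep the gain imperceptible
            return gain * real_delta_yaw_deg

        # Example frame update: the user turned 2.0 degrees; the steering controller asks
        # for a gain of 1.4, which is clamped to 1.25.
        virtual_yaw = 0.0
        virtual_yaw += redirect_yaw(2.0, desired_gain=1.4)
        print(virtual_yaw)   # 2.5 degrees of virtual rotation for 2.0 degrees of real rotation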

  19. A new dynamic 3D virtual methodology for teaching the mechanics of atrial septation as seen in the human heart

    PubMed Central

    Schleich, Jean-Marc; Dillenseger, Jean-Louis; Houyel, Lucile; Almange, Claude; Anderson, Robert H.

    2009-01-01

    Background Learning embryology remains difficult, since it requires understanding of many complex phenomena. The temporal evolution of developmental events has classically been illustrated using cartoons, which create difficulty in linking spatial and temporal aspects, such correlation being the keystone of descriptive embryology. Methods We synthesized the bibliographic data from recent studies of atrial septal development. On the basis of this synthesis, consensus on the stages of atrial septation as seen in the human heart has been reached by a group of experts in cardiac embryology and paediatric cardiology. This has permitted the preparation of three-dimensional (3-D) computer graphic objects for the anatomical components involved in the different stages of normal human atrial septation. Results We have provided a virtual guide to the process of normal atrial septation, the animation providing an appreciation of the temporal and morphologic events necessary to separate the systemic and pulmonary venous returns. Conclusion We have shown that our animations of normal human atrial septation increase significantly the teaching of the complex developmental processes involved, and provide a new dynamic for the process of learning. PMID:19363807

  20. A new dynamic 3D virtual methodology for teaching the mechanics of atrial septation as seen in the human heart.

    PubMed

    Schleich, Jean-Marc; Dillenseger, Jean-Louis; Houyel, Lucile; Almange, Claude; Anderson, Robert H

    2009-01-01

    Learning embryology remains difficult, since it requires understanding of many complex phenomena. The temporal evolution of developmental events has classically been illustrated using cartoons, which create difficulty in linking spatial and temporal aspects, such correlation being the keystone of descriptive embryology. We synthesized the bibliographic data from recent studies of atrial septal development. On the basis of this synthesis, consensus on the stages of atrial septation as seen in the human heart has been reached by a group of experts in cardiac embryology and pediatric cardiology. This has permitted the preparation of three-dimensional (3D) computer graphic objects for the anatomical components involved in the different stages of normal human atrial septation. We have provided a virtual guide to the process of normal atrial septation, the animation providing an appreciation of the temporal and morphologic events necessary to separate the systemic and pulmonary venous returns. We have shown that our animations of normal human atrial septation increase significantly the teaching of the complex developmental processes involved, and provide a new dynamic for the process of learning. PMID:19363807

  1. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  2. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
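
    The target of the conversion described above is an STL file, which is essentially a list of triangles with outward normals. The sketch below, which is independent of the VMD-based workflow in the paper, writes a minimal ASCII STL to show what the printable output format contains.

        # Sketch: write a minimal ASCII STL file (the format consumed by 3D printers).
        # The triangle data here are arbitrary; a molecular surface would supply
        # thousands of such facets.
        import numpy as np

        def write_ascii_stl(path, triangles, name="model"):
            """triangles: iterable of 3x3 arrays, one row per vertex (x, y, z)."""
            with open(path, "w") as f:
                f.write(f"solid {name}\n")
                for tri in triangles:
                    tri = np.asarray(tri, dtype=float)
                    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                    n /= (np.linalg.norm(n) or 1.0)
                    f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
                    f.write("    outer loop\n")
                    for v in tri:
                        f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
                    f.write("    endloop\n")
                    f.write("  endfacet\n")
                f.write(f"endsolid {name}\n")

        write_ascii_stl("demo.stl", [[[0, 0, 0], [1, 0, 0], [0, 1, 0]]])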

  3. The use of a new 3D splint and double CT scan procedure to obtain an accurate anatomic virtual augmented model of the skull.

    PubMed

    Swennen, G R J; Barth, E-L; Eulzer, C; Schutyser, F

    2007-02-01

    Three-dimensional (3D) virtual planning of orthognathic surgery requires detailed visualization of the interocclusal relationship. The purpose of this study was to introduce the modification of the double computed tomography (CT) scan procedure using a newly designed 3D splint in order to obtain a detailed anatomic 3D virtual augmented model of the skull. A total of 10 dry adult human cadaver skulls were used to evaluate the accuracy of the automatic rigid registration method for fusion of both CT datasets (Maxilim, version 1.3.0). The overall mean registration error was 0.1355+/-0.0323 mm (range 0.0760-0.1782 mm). Analysis of variance showed a registration method error of 0.0564 mm (P < 0.001; 95% confidence interval = 0.0491-0.0622). The combination of the newly designed 3D splint with the double CT scan procedure allowed accurate registration and the set-up of an accurate anatomic 3D virtual augmented model of the skull with detailed dental surface.
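
    The registration step evaluated here aligns two CT datasets rigidly. When corresponding points are available, the optimal rigid transform has a closed-form least-squares solution; the sketch below demonstrates that solution on synthetic points and reports a mean residual of the kind quoted above. It is an illustration only, not the automatic registration algorithm implemented in the planning software.

        # Sketch: closed-form rigid registration (rotation + translation) between two sets
        # of corresponding 3D points, plus the mean residual ("registration error").
        # Points are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        P = rng.uniform(-50, 50, size=(10, 3))            # points in the first CT frame (mm)

        # Ground-truth transform used to generate the second point set.
        angle = np.radians(10.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                           [np.sin(angle),  np.cos(angle), 0],
                           [0,              0,             1]])
        t_true = np.array([3.0, -2.0, 5.0])
        Q = P @ R_true.T + t_true + rng.normal(0, 0.05, P.shape)   # second frame + noise

        # SVD-based least-squares fit (Kabsch/Umeyama).
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP

        residuals = np.linalg.norm(P @ R.T + t - Q, axis=1)
        print(f"mean registration error: {residuals.mean():.4f} mm")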

  4. Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David

    2009-01-01

    Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…

  5. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to the general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  6. Incorporating immersive virtual environments in health promotion campaigns: a construal level theory approach.

    PubMed

    Ahn, Sun Joo Grace

    2015-01-01

    In immersive virtual environments (IVEs), users may observe negative consequences of a risky health behavior in a personally involving way via digital simulations. In the context of an ongoing health promotion campaign, IVEs coupled with pamphlets are proposed as a novel messaging strategy to heighten personal relevance and involvement with the issue of soft-drink consumption and obesity, as well as perceptions that the risk is proximal and imminent. The framework of construal level theory guided the design of a 2 (tailoring: other vs. self) × 2 (medium: pamphlet only vs. pamphlet with IVEs) between-subjects experiment to test the efficacy in reducing the consumption of soft drinks over 1 week. Immediately following exposure, tailoring the message to the self (vs. other) seemed to be effective in reducing intentions to consume soft drinks. The effect of tailoring dissipated after 1 week, and measures of actual soft-drink consumption 1 week following experimental treatments demonstrated that coupling IVEs with the pamphlet was more effective. Behavioral intention was a significant predictor of actual behavior, but underlying mechanisms driving intentions and actual behavior were distinct. Results prescribed a messaging strategy that incorporates both tailoring and coupling IVEs with traditional media to increase behavioral changes over time. PMID:24991725

  7. Multi-parallel open technology to enable collaborative volume visualization: how to create global immersive virtual anatomy classrooms.

    PubMed

    Silverstein, Jonathan C; Walsh, Colin; Dech, Fred; Olson, Eric; E, Michael; Parsad, Nigel; Stevens, Rick

    2008-01-01

    Many prototype projects aspire to develop a sustainable model of immersive radiological volume visualization for virtual anatomic education. Some have focused on distributed or parallel architectures. However, very few, if any others, have combined multi-location, multi-directional, multi-stream sharing of video, audio, desktop applications, and parallel stereo volume rendering, to converge on an open, globally scalable, and inexpensive collaborative architecture and implementation method for anatomic teaching using radiological volumes. We have focused our efforts on bringing this all together for several years. We outline here the technology we're making available to the open source community and a system implementation suggestion for how to create global immersive virtual anatomy classrooms. With the releases of Access Grid 3.1 and our parallel stereo volume rendering code, inexpensive globally scalable technology is available to enable collaborative volume visualization upon an award-winning framework. Based upon these technologies, immersive virtual anatomy classrooms that share educational or clinical principles can be constructed with the setup described with moderate technological expertise and global scalability.

  8. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately US $5,000. This scanner uses visible light sensing to capture structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  9. A Second Chance at Health: How a 3D Virtual World Can Improve Health Self-Efficacy for Weight Loss Management Among Adults.

    PubMed

    Behm-Morawitz, Elizabeth; Lewallen, Jennifer; Choi, Grace

    2016-02-01

    Health self-efficacy, or the beliefs in one's capabilities to perform health behaviors, is a significant factor in eliciting health behavior change, such as weight loss. Research has demonstrated that virtual embodiment has the potential to alter one's psychology and physicality, particularly in health contexts; however, little is known about the impacts embodiment in a virtual world has on health self-efficacy. The present research is a randomized controlled trial (N = 90) examining the effectiveness of virtual embodiment and play in a social virtual world (Second Life [SL]) for increasing health self-efficacy (exercise and nutrition efficacy) among overweight adults. Participants were randomly assigned to a 3D social virtual world (avatar virtual interaction experimental condition), 2D social networking site (no avatar virtual interaction control condition), or no intervention (no virtual interaction control condition). The findings of this study provide initial evidence for the use of SL to improve exercise efficacy and to support weight loss. Results also suggest that individuals who have higher self-presence with their avatar reap more benefits. Finally, quantitative findings are triangulated with qualitative data to increase confidence in the results and provide richer insight into the perceived effectiveness and limitations of SL for meeting weight loss goals. Themes resulting from the qualitative analysis indicate that participation in SL can improve motivation and efficacy to try new physical activities; however, individuals who have a dislike for video games may not be benefitted by avatar-based virtual interventions. Implications for research on the transformative potential of virtual embodiment and self-presence in general are discussed.

  10. A Second Chance at Health: How a 3D Virtual World Can Improve Health Self-Efficacy for Weight Loss Management Among Adults.

    PubMed

    Behm-Morawitz, Elizabeth; Lewallen, Jennifer; Choi, Grace

    2016-02-01

    Health self-efficacy, or the beliefs in one's capabilities to perform health behaviors, is a significant factor in eliciting health behavior change, such as weight loss. Research has demonstrated that virtual embodiment has the potential to alter one's psychology and physicality, particularly in health contexts; however, little is known about the impacts embodiment in a virtual world has on health self-efficacy. The present research is a randomized controlled trial (N = 90) examining the effectiveness of virtual embodiment and play in a social virtual world (Second Life [SL]) for increasing health self-efficacy (exercise and nutrition efficacy) among overweight adults. Participants were randomly assigned to a 3D social virtual world (avatar virtual interaction experimental condition), 2D social networking site (no avatar virtual interaction control condition), or no intervention (no virtual interaction control condition). The findings of this study provide initial evidence for the use of SL to improve exercise efficacy and to support weight loss. Results also suggest that individuals who have higher self-presence with their avatar reap more benefits. Finally, quantitative findings are triangulated with qualitative data to increase confidence in the results and provide richer insight into the perceived effectiveness and limitations of SL for meeting weight loss goals. Themes resulting from the qualitative analysis indicate that participation in SL can improve motivation and efficacy to try new physical activities; however, individuals who have a dislike for video games may not be benefitted by avatar-based virtual interventions. Implications for research on the transformative potential of virtual embodiment and self-presence in general are discussed. PMID:26882324

  11. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The full field-of-view average angular resolution is 18.6 pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has practical significance. PMID:27607272
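
    The partially overlapping binocular layout mentioned above trades binocular overlap for total field of view: the combined horizontal FOV equals twice the monocular FOV minus the overlap. The sketch below works through that relationship and a pixels-per-degree estimate using hypothetical numbers, not the actual design parameters of this HMD.

        # Sketch: geometry bookkeeping for a partially overlapping binocular HMD.
        # All numbers are hypothetical and chosen only to illustrate the relationships.
        monocular_fov_deg = 100.0      # horizontal FOV seen by one eye
        overlap_deg       = 58.0       # binocular overlap region
        h_pixels_per_eye  = 1920       # horizontal pixels driving each eye

        total_fov_deg = 2 * monocular_fov_deg - overlap_deg   # combined horizontal FOV
        ppd_per_eye   = h_pixels_per_eye / monocular_fov_deg  # average pixels per degree, per eye

        print(f"total horizontal FOV: {total_fov_deg:.1f} deg")              # 142.0 deg here
        print(f"average angular resolution: {ppd_per_eye:.1f} px/deg per eye")  # 19.2 here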

  12. Bystander responses to a violent incident in an immersive virtual environment.

    PubMed

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. Forty male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more the participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience, and analysis of post-experiment interview data, suggest that in-group members were more prone to confrontational intervention compared to the out-group, who were more prone to make statements to try to defuse the situation.

  13. Bystander responses to a violent incident in an immersive virtual environment.

    PubMed

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. Forty male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more the participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience, and analysis of post-experiment interview data, suggest that in-group members were more prone to confrontational intervention compared to the out-group, who were more prone to make statements to try to defuse the situation. PMID:23300991

  14. Bystander Responses to a Violent Incident in an Immersive Virtual Environment

    PubMed Central

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J.; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. Forty male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more the participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience, and analysis of post-experiment interview data, suggest that in-group members were more prone to confrontational intervention compared to the out-group, who were more prone to make statements to try to defuse the situation. PMID:23300991

  15. Feasibility of an Immersive Virtual Reality Intervention for Hospitalized Patients: An Observational Cohort Study

    PubMed Central

    2016-01-01

    Background Virtual reality (VR) offers immersive, realistic, three-dimensional experiences that “transport” users to novel environments. Because VR is effective for acute pain and anxiety, it may have benefits for hospitalized patients; however, there are few reports using VR in this setting. Objective The aim was to evaluate the acceptability and feasibility of VR in a diverse cohort of hospitalized patients. Methods We assessed the acceptability and feasibility of VR in a cohort of patients admitted to an inpatient hospitalist service over a 4-month period. We excluded patients with motion sickness, stroke, seizure, dementia, or nausea, and those in isolation. Eligible patients viewed VR experiences (eg, ocean exploration; Cirque du Soleil; tour of Iceland) with Samsung Gear VR goggles. We then conducted semistructured patient interviews and performed statistical testing to compare patients willing versus unwilling to use VR. Results We evaluated 510 patients; 423 were excluded and 57 refused to participate, leaving 30 participants. Patients willing versus unwilling to use VR were younger (mean 49.1, SD 17.4 years vs mean 60.2, SD 17.7 years; P=.01); there were no differences by sex, race, or ethnicity. Among users, most reported a positive experience and indicated that VR could improve pain and anxiety, although many felt the goggles were uncomfortable. Conclusions Most inpatient users of VR described the experience as pleasant and capable of reducing pain and anxiety. However, few hospitalized patients in this “real-world” series were both eligible and willing to use VR. Consistent with the “digital divide” for emerging technologies, younger patients were more willing to participate. Future research should evaluate the impact of VR on clinical and resource outcomes. Trial Registration: ClinicalTrials.gov NCT02456987; https://clinicaltrials.gov/ct2/show/NCT02456987 (Archived by WebCite at http://www.webcitation.org/6iFIMRNh3) PMID:27349654

  16. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The full field-of-view average angular resolution is 18.6 pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has practical significance.

  17. Atom pair 2D-fingerprints perceive 3D-molecular shape and pharmacophores for very fast virtual screening of ZINC and GDB-17.

    PubMed

    Awale, Mahendra; Reymond, Jean-Louis

    2014-07-28

    Three-dimensional (3D) molecular shape and pharmacophores are important determinants of the biological activity of organic molecules; however, a precise computation of 3D-shape is generally too slow for virtual screening of very large databases. A reinvestigation of the concept of atom pairs initially reported by Carhart et al. and extended by Schneider et al. showed that a simple atom pair fingerprint (APfp) counting atom pairs at increasing topological distances in 2D-structures without atom property assignment correlates with various representations of molecular shape extracted from the 3D-structures. A related 55-dimensional atom pair fingerprint extended with atom properties (Xfp) provided an efficient pharmacophore fingerprint with good performance for ligand-based virtual screening such as the recovery of active compounds from decoys in DUD, and overlap with the ROCS 3D-pharmacophore scoring function. The APfp and Xfp data were organized for web-based extremely fast nearest-neighbor searching in ZINC (13.5 M compounds) and GDB-17 (50 M random subset) freely accessible at www.gdb.unibe.ch .
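
    The atom-pair idea described above is conceptually simple: for every pair of heavy atoms, count the topological (bond-path) distance separating them. The sketch below builds such a property-free distance-count vector with RDKit; it illustrates the principle only and is not the authors' APfp/Xfp implementation.

        # Sketch: a property-free atom-pair fingerprint that counts heavy-atom pairs by their
        # topological (shortest bond path) distance, in the spirit of APfp. Not the authors' code.
        import numpy as np
        from rdkit import Chem

        def atom_pair_distance_counts(smiles: str, max_dist: int = 20) -> np.ndarray:
            mol = Chem.MolFromSmiles(smiles)
            dmat = Chem.GetDistanceMatrix(mol)       # topological distances between atoms
            counts = np.zeros(max_dist, dtype=int)
            n = mol.GetNumAtoms()
            for i in range(n):
                for j in range(i + 1, n):
                    d = int(dmat[i, j])
                    if 1 <= d <= max_dist:
                        counts[d - 1] += 1           # bin by bond-path length
            return counts

        fp_aspirin  = atom_pair_distance_counts("CC(=O)Oc1ccccc1C(=O)O")
        fp_caffeine = atom_pair_distance_counts("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")
        # A simple shape-like comparison: city-block distance between the count vectors.
        print(np.abs(fp_aspirin - fp_caffeine).sum())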

  18. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  19. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  20. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533
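
    For orientation, a generic fractional-step (projection) scheme has the structure sketched below in its simplest semi-discrete Cartesian form; the paper's actual method is a second-order accurate variant formulated in generalized curvilinear coordinates, with a Jacobian-free Newton-Krylov solver for the momentum step and multigrid-preconditioned GMRES for the pressure Poisson equation.

        \[
        \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
          = -\left(\mathbf{u}^{n}\cdot\nabla\right)\mathbf{u}^{n} + \nu\,\nabla^{2}\mathbf{u}^{n},
        \qquad
        \nabla^{2}\phi = \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},
        \qquad
        \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla\phi .
        \]

    Here the pressure-like variable phi projects the provisional velocity u* onto a divergence-free field, enforcing incompressibility at the new time level.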

  1. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients.

    PubMed

    Lledó, Luis D; Díez, Jorge A; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J; Sabater-Navarro, José M; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments provide a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the task objects. In contrast, 2D virtual environments represent the tasks with a lower degree of realism, using two-dimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to assess his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This suggests that the patients' motor recovery increased. Despite the similarity of most kinematic parameters, differences in reaction time and path length were greater in the 3D task. Regarding the success rates…
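
    The kinematic outcome measures named above can be computed directly from the end-effector trajectory sampled by the robot. The sketch below does this for a synthetic trajectory rather than patient data; the 5%-of-peak-speed onset rule used for reaction time is an assumed convention, not necessarily the one used in the study.

        # Sketch: compute path length, peak speed and a threshold-based reaction time from a
        # sampled 2D end-effector trajectory. The trajectory is synthetic, not patient data.
        import numpy as np

        t = np.linspace(0.0, 2.0, 201)                      # seconds, 100 Hz sampling
        # Synthetic minimum-jerk-like reach of 0.25 m starting after a 0.3 s delay.
        s = np.clip((t - 0.3) / 1.2, 0.0, 1.0)
        profile = 10 * s**3 - 15 * s**4 + 6 * s**5
        xy = np.column_stack([0.25 * profile, np.zeros_like(t)])

        speed = np.linalg.norm(np.gradient(xy, t, axis=0), axis=1)

        path_length   = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
        peak_speed    = speed.max()
        reaction_time = t[np.argmax(speed > 0.05 * peak_speed)]   # first sample above threshold

        print(f"path length   = {path_length:.3f} m")
        print(f"peak speed    = {peak_speed:.3f} m/s")
        print(f"reaction time = {reaction_time:.2f} s")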

  3. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments provide a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the task objects. In contrast, 2D virtual environments represent the tasks with a lower degree of realism, using two-dimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to assess his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This suggests that the patients' motor recovery increased. Despite the similarity of most kinematic parameters, differences in reaction time and path length were greater in the 3D task. Regarding the success rates…

  5. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs) or simply virtual worlds provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  6. A New Approach to Improve Cognition, Muscle Strength, and Postural Balance in Community-Dwelling Elderly with a 3-D Virtual Reality Kayak Program.

    PubMed

    Park, Junhyuck; Yim, JongEun

    2016-01-01

    Aging is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity. Recently, intervention methods with virtual reality have been introduced, providing an enjoyable therapy for the elderly. The aim of this study was to investigate whether a 3-D virtual reality kayak program could improve the cognitive function, muscle strength, and balance of community-dwelling elderly. Importantly, kayaking involves most of the upper body musculature and requires balance control. Seventy-two participants were randomly allocated into the kayak program group (n = 36) and the control group (n = 36). The two groups were well matched with respect to general characteristics at baseline. The participants in both groups performed a conventional exercise program for 30 min, and then the 3-D virtual reality kayak program was performed in the kayak program group for 20 min, two times a week for 6 weeks. Cognitive function was measured using the Montreal Cognitive Assessment. Muscle strength was measured using the arm curl and handgrip strength tests. Standing and sitting balance was measured using the Good Balance system. The post-test was performed in the same manner as the pre-test; the overall outcomes such as cognitive function (p < 0.05), muscle strength (p < 0.05), and balance (standing and sitting balance, p < 0.05) were significantly improved in the kayak program group compared to the control group. We propose that the 3-D virtual reality kayak program is a promising intervention method for improving the cognitive function, muscle strength, and balance of the elderly.

  7. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing desired buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military applications and so on. However, most technologies provide a 3D display in front of screens that are parallel to the walls, which decreases the sense of immersion. To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the common focal plane, and the cameras' optical axes should be offset toward the center of the common focal plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
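
    The offset perspective projection cameras described above correspond to an asymmetric viewing frustum whose near-plane window is shifted according to the eye position relative to the display surface. The sketch below builds such an OpenGL-style off-axis projection matrix; the screen size and eye position are hypothetical.

        # Sketch: build an OpenGL-style off-axis (asymmetric frustum) projection matrix for a
        # viewer whose eye is offset from the centre of a flat display surface.
        # Screen dimensions and eye position are hypothetical.
        import numpy as np

        def off_axis_projection(eye, screen_w, screen_h, near, far):
            """eye = (x, y, z): eye position in the display's local frame, with the display
            lying in the z = 0 plane and z > 0 the viewing distance from it."""
            ex, ey, ez = eye
            # Frustum extents on the near plane, shifted opposite to the eye offset.
            left   = (-screen_w / 2 - ex) * near / ez
            right  = ( screen_w / 2 - ex) * near / ez
            bottom = (-screen_h / 2 - ey) * near / ez
            top    = ( screen_h / 2 - ey) * near / ez
            return np.array([
                [2 * near / (right - left), 0, (right + left) / (right - left), 0],
                [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
                [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
                [0, 0, -1, 0],
            ])

        # Eye 1.5 m above the centre of a 2 m x 1.2 m horizontal display, offset 0.2 m along its width.
        P = off_axis_projection(eye=(0.2, 0.0, 1.5), screen_w=2.0, screen_h=1.2, near=0.1, far=50.0)
        print(np.round(P, 3))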

  8. Searching for anthranilic acid-based thumb pocket 2 HCV NS5B polymerase inhibitors through a combination of molecular docking, 3D-QSAR and virtual screening.

    PubMed

    Vrontaki, Eleni; Melagraki, Georgia; Mavromoustakos, Thomas; Afantitis, Antreas

    2016-01-01

    A combination of the following computational methods: (i) molecular docking, (ii) 3-D Quantitative Structure Activity Relationship Comparative Molecular Field Analysis (3D-QSAR CoMFA), (iii) similarity search and (iv) virtual screening using the PubChem database was applied to identify new anthranilic acid-based inhibitors of hepatitis C virus (HCV) replication. A number of known inhibitors were initially docked into the "Thumb Pocket 2" allosteric site of the crystal structure of the enzyme HCV RNA-dependent RNA polymerase (NS5B GT1b). Then, the CoMFA fields were generated through a receptor-based alignment of docking poses to build a validated and stable 3D-QSAR CoMFA model. The proposed model can first be utilized to gain insight into the molecular features that promote bioactivity and can then be used, within a virtual screening procedure, to estimate the activity of novel potentially bioactive compounds prior to their synthesis and biological testing.
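
    Step (iii) of the workflow, similarity search, is in practice a fingerprint comparison of a query scaffold against a compound library. The sketch below runs a minimal Tanimoto screen with RDKit Morgan fingerprints over a tiny in-memory library; the compounds and the cutoff are illustrative and are not the descriptors or database used in this study.

        # Sketch: a minimal ligand-based similarity screen. A query anthranilic-acid scaffold is
        # compared against a toy "library" by Tanimoto similarity of Morgan fingerprints.
        # Compounds and threshold are illustrative only.
        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem

        query = Chem.MolFromSmiles("Nc1ccccc1C(=O)O")          # anthranilic acid scaffold
        library = {
            "N-methylanthranilic acid": "CNc1ccccc1C(=O)O",
            "salicylic acid":           "Oc1ccccc1C(=O)O",
            "benzene":                  "c1ccccc1",
        }

        def fp(mol):
            return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

        fp_query = fp(query)
        for name, smi in library.items():
            sim = DataStructs.TanimotoSimilarity(fp_query, fp(Chem.MolFromSmiles(smi)))
            flag = "keep" if sim >= 0.4 else "discard"          # illustrative cutoff
            print(f"{name:28s} Tanimoto = {sim:.2f} -> {flag}")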

  9. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    PubMed

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well-acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include, for example, the mushroom bodies, central complex, and antennal and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  10. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  11. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  12. Caring in the Dynamics of Design and Languaging: Exploring Second Language Learning in 3D Virtual Spaces

    ERIC Educational Resources Information Center

    Zheng, Dongping

    2012-01-01

    This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…

  13. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  14. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery.

  15. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.
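
    The "experienced artificial vision" side of such a framework is commonly rendered as a coarse grid of Gaussian phosphenes; the following is a generic sketch of that idea, where the grid size and phosphene width are arbitrary choices, not the authors' settings.

      import numpy as np

      def simulate_phosphenes(image, grid=(16, 16), sigma_px=6, out_size=(240, 240)):
          """Average the input luminance image over a coarse electrode grid and
          draw each electrode as a Gaussian blob ("phosphene")."""
          h, w = image.shape
          gh, gw = grid
          oh, ow = out_size
          cells = image[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
          levels = cells.mean(axis=(1, 3))       # mean luminance per electrode
          yy, xx = np.mgrid[0:oh, 0:ow]
          canvas = np.zeros(out_size)
          for i in range(gh):
              for j in range(gw):
                  cy, cx = (i + 0.5) * oh / gh, (j + 0.5) * ow / gw
                  canvas += levels[i, j] * np.exp(
                      -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_px ** 2))
          return np.clip(canvas, 0.0, 1.0)

      frame = np.random.rand(480, 640)           # stand-in for a rendered camera frame
      spv_view = simulate_phosphenes(frame)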

  16. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    ERIC Educational Resources Information Center

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, mimicking the opportunity for immersion that real-life travel to a foreign country offers. We have very little research to show whether this is…

  17. Enabling immersive simulation.

    SciTech Connect

    McCoy, Josh; Mateas, Michael; Hart, Derek H.; Whetzel, Jonathan; Basilico, Justin Derrick; Glickman, Matthew R.; Abbott, Robert G.

    2009-02-01

    The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.

  18. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  19. Building Analysis for Urban Energy Planning Using Key Indicators on Virtual 3d City Models - the Energy Atlas of Berlin

    NASA Astrophysics Data System (ADS)

    Krüger, A.; Kolbe, T. H.

    2012-07-01

    In the context of increasing greenhouse gas emission and global demographic change with the simultaneous trend to urbanization, it is a big challenge for cities around the world to perform modifications in energy supply chain and building characteristics resulting in reduced energy consumption and carbon dioxide mitigation. Sound knowledge of energy resource demand and supply including its spatial distribution within urban areas is of great importance for planning strategies addressing greater energy efficiency. The understanding of the city as a complex energy system affects several areas of the urban living, e.g. energy supply, urban texture, human lifestyle, and climate protection. With the growing availability of 3D city models around the world based on the standard language and format CityGML, energy system modelling, analysis and simulation can be incorporated into these models. Both domains will profit from that interaction by bringing together official and accurate building models including building geometries, semantics and locations forming a realistic image of the urban structure with systemic energy simulation models. A holistic view on the impacts of energy planning scenarios can be modelled and analyzed including side effects on urban texture and human lifestyle. This paper focuses on the identification, classification, and integration of energy-related key indicators of buildings and neighbourhoods within 3D building models. Consequent application of 3D city models conforming to CityGML serves the purpose of deriving indicators for this topic. These will be set into the context of urban energy planning within the Energy Atlas Berlin. The generation of indicator objects covering the indicator values and related processing information will be presented on the sample scenario estimation of heating energy consumption in buildings and neighbourhoods. In their entirety the key indicators will form an adequate image of the local energy situation for

  20. The Perception of Immersion

    NASA Technical Reports Server (NTRS)

    Begault, Durand Rene'; Wenzel, Elizabeth M.

    2016-01-01

    Immersion refers acoustically to sounds as coming from all directions about a listener, which normally is an inevitable consequence of human listening in an air medium. Audible sound sources are everywhere in everyday environments where sound waves propagate and reflect from surfaces around a listener. Even in environments where sounds are minimized to the greatest degree possible, such as an anechoic chamber, self-generated sound will be audible. However, the common meaning of immersion in audio and acoustics refers to the psychological sensation of being surrounded by specific sound sources. Although acoustically a sound can reach a listener from multiple surrounding directions, its spatial characteristics may be judged as unrealistic, static or constrained. For example, good-quality concert hall acoustics has traditionally been correlated with a listener's sensation of being immersed by the sound of the orchestra, as opposed to the sound seeming distant and removed. Spatial audio techniques, particularly 3D audio, can provide an immersive experience because virtual sound sources and sound reflections can be made to appear from anywhere in space about a listener. This chapter introduces a listener to the physiological, psychoacoustic and acoustic bases of these sensations.

  1. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. President and founder of 123 Certification Inc., Mr. Claude Choquet, Ing., M.Sc., IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents world-wide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head mounted display (HMD), a 6 degrees of freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD are interacting online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real-time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a

  2. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  3. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  4. In silico exploration of c-KIT inhibitors by pharmaco-informatics methodology: pharmacophore modeling, 3D QSAR, docking studies, and virtual screening.

    PubMed

    Chaudhari, Prashant; Bari, Sanjay

    2016-02-01

    c-KIT is a component of the platelet-derived growth factor receptor family, classified as a type-III receptor tyrosine kinase. c-KIT has been reported to be involved in small cell lung cancer, other malignant human cancers, and inflammatory and autoimmune diseases associated with mast cells. Available c-KIT inhibitors suffer from growing resistance or cardiac toxicity. A combined in silico pharmacophore and structure-based virtual screening was performed to identify novel potential c-KIT inhibitors. In the present study, five molecules from the ZINC database were retrieved as new potential c-KIT inhibitors, using Schrödinger's Maestro 9.0 molecular modeling suite. An atom-featured 3D QSAR model was built using previously reported c-KIT inhibitors containing the indolin-2-one scaffold. The developed 3D QSAR model ADHRR.24 was found to be significant (R² = 0.9378, Q² = 0.7832) and sufficiently robust with good predictive accuracy, as confirmed through external validation approaches, Y-randomization and the GH approach [GH score 0.84 and Enrichment factor (E) 4.964]. The present QSAR model was further validated for OECD principle 3, in that the applicability domain was calculated using a "standardization approach." Molecular docking of the QSAR dataset molecules and final ZINC hits was performed on the c-KIT receptor (PDB ID: 3G0E). Docking interactions were in agreement with the developed 3D QSAR model. Model ADHRR.24 was explored for ligand-based virtual screening followed by in silico ADME prediction studies. Five molecules from the ZINC database were obtained as potential c-KIT inhibitors with high in silico predicted activity and strong key binding interactions with the c-KIT receptor.
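
    The Y-randomization check mentioned for model validation is easy to sketch: refit the model after shuffling the activity values and confirm that the fit collapses. The linear model and synthetic descriptor matrix below are stand-ins for the actual atom-featured 3D QSAR model and its descriptors, not the study's data.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))                               # fake descriptors
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=200)   # fake activities

      model = LinearRegression().fit(X, y)
      r2_real = r2_score(y, model.predict(X))

      # Y-randomization: a sound model should lose its fit once activities are shuffled.
      r2_shuffled = []
      for _ in range(100):
          y_perm = rng.permutation(y)
          m = LinearRegression().fit(X, y_perm)
          r2_shuffled.append(r2_score(y_perm, m.predict(X)))

      print(f"r2 (real y) = {r2_real:.3f}, mean r2 (shuffled y) = {np.mean(r2_shuffled):.3f}")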

  5. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as the viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the importance of the viewpoint in structural geology and therefore the potential of using orthorectified virtual outcrops.
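
    The bedding-fault intersection direction used as the viewpoint follows directly from the two measured plane attitudes (the cross product of the plane normals); a small sketch with dip-direction/dip input, using arbitrary example attitudes rather than the outcrop's measured values.

      import numpy as np

      def pole(dip_direction_deg, dip_deg):
          """Upward unit normal of a plane, in (east, north, up) coordinates."""
          d, t = np.radians(dip_direction_deg), np.radians(dip_deg)
          return np.array([-np.sin(t) * np.sin(d), -np.sin(t) * np.cos(d), np.cos(t)])

      def intersection_trend_plunge(plane_a, plane_b):
          """Trend/plunge (degrees) of the intersection line of two planes,
          each given as (dip direction, dip)."""
          line = np.cross(pole(*plane_a), pole(*plane_b))
          line /= np.linalg.norm(line)
          if line[2] > 0:                        # take the downward-pointing end
              line = -line
          trend = np.degrees(np.arctan2(line[0], line[1])) % 360.0
          plunge = np.degrees(np.arcsin(-line[2]))
          return trend, plunge

      # Illustrative attitudes only: bedding 120/25, fault 210/60.
      print(intersection_trend_plunge((120, 25), (210, 60)))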

  6. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2004-12-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting in a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  7. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting in a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  8. Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion

    PubMed Central

    Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V.; Slater, Mel

    2012-01-01

    Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part – the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, and there was no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2–C4 conditions, although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms between ownership and drift. Overall, these findings extend and enrich previous results that multisensory and sensorimotor information can reconstruct our perception of the body shape, size and symmetry even when this is not consistent with normal body proportions. PMID:22829891

  9. MITRE's virtual model shop

    NASA Astrophysics Data System (ADS)

    Wingfield, Michael A.

    1995-04-01

    The exploration of visual data and the use of visual information during the design process can be greatly enhanced by working within the virtual environment where the user is closely coupled to the data by means of immersive technologies and natural user interfaces. Current technology enables us to construct a virtual environment utilizing 3D graphics projection, object generated stereo sound, tactile feedback, and voice command input. Advances in software architectures and user interfaces enable us to focus on enhancing the design process within the virtual environment. These explorations at MITRE have evolved into an application which focuses on the ability to create, manipulate, and explore photo and audio realistic 3D models of work spaces, office complexes, and entire communities in real-time. This application, the Virtual Interactive Planning System, is a component of the MITRE virtual model shop, a suite of applications which permits the user to design and manipulate computer graphics models within the virtual environment.

  10. Drug Design for CNS Diseases: Polypharmacological Profiling of Compounds Using Cheminformatic, 3D-QSAR and Virtual Screening Methodologies.

    PubMed

    Nikolic, Katarina; Mavridis, Lazaros; Djikic, Teodora; Vucicevic, Jelica; Agbaba, Danica; Yelekci, Kemal; Mitchell, John B O

    2016-01-01

    HIGHLIGHTS: Many CNS targets are being explored for multi-target drug design. New databases and cheminformatic methods enable prediction of primary pharmaceutical target and off-targets of compounds. QSAR, virtual screening and docking methods increase the potential of rational drug design. The diverse cerebral mechanisms implicated in Central Nervous System (CNS) diseases together with the heterogeneous and overlapping nature of phenotypes indicated that multitarget strategies may be appropriate for the improved treatment of complex brain diseases. Understanding how the neurotransmitter systems interact is also important in optimizing therapeutic strategies. Pharmacological intervention on one target will often influence another one, such as the well-established serotonin-dopamine interaction or the dopamine-glutamate interaction. It is now accepted that drug action can involve plural targets and that polypharmacological interaction with multiple targets, to address disease in more subtle and effective ways, is a key concept for development of novel drug candidates against complex CNS diseases. A multi-target therapeutic strategy for Alzheimer's disease resulted in the development of very effective Multi-Target Designed Ligands (MTDL) that act on both the cholinergic and monoaminergic systems, and also retard the progression of neurodegeneration by inhibiting amyloid aggregation. Many compounds already in databases have been investigated as ligands for multiple targets in drug-discovery programs. A probabilistic method, the Parzen-Rosenblatt Window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. Based on all these findings, it is concluded that multipotent ligands

  11. Drug Design for CNS Diseases: Polypharmacological Profiling of Compounds Using Cheminformatic, 3D-QSAR and Virtual Screening Methodologies

    PubMed Central

    Nikolic, Katarina; Mavridis, Lazaros; Djikic, Teodora; Vucicevic, Jelica; Agbaba, Danica; Yelekci, Kemal; Mitchell, John B. O.

    2016-01-01

    HIGHLIGHTS: Many CNS targets are being explored for multi-target drug design. New databases and cheminformatic methods enable prediction of primary pharmaceutical target and off-targets of compounds. QSAR, virtual screening and docking methods increase the potential of rational drug design. The diverse cerebral mechanisms implicated in Central Nervous System (CNS) diseases together with the heterogeneous and overlapping nature of phenotypes indicated that multitarget strategies may be appropriate for the improved treatment of complex brain diseases. Understanding how the neurotransmitter systems interact is also important in optimizing therapeutic strategies. Pharmacological intervention on one target will often influence another one, such as the well-established serotonin-dopamine interaction or the dopamine-glutamate interaction. It is now accepted that drug action can involve plural targets and that polypharmacological interaction with multiple targets, to address disease in more subtle and effective ways, is a key concept for development of novel drug candidates against complex CNS diseases. A multi-target therapeutic strategy for Alzheimer's disease resulted in the development of very effective Multi-Target Designed Ligands (MTDL) that act on both the cholinergic and monoaminergic systems, and also retard the progression of neurodegeneration by inhibiting amyloid aggregation. Many compounds already in databases have been investigated as ligands for multiple targets in drug-discovery programs. A probabilistic method, the Parzen-Rosenblatt Window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. Based on all these findings, it is concluded that multipotent
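
    The Parzen-Rosenblatt window predictor named in these two records can be sketched as a per-class kernel score over compound descriptors; the Gaussian kernel, bandwidth and synthetic descriptor vectors below are stand-ins for the ChEMBL-derived data and are not the exact kernel used in the study.

      import numpy as np

      def parzen_scores(x, X_train, y_train, bandwidth=1.0):
          """Parzen-Rosenblatt window score per target class for a query descriptor
          vector x; the top-ranked class is the predicted primary target and
          lower-ranked classes suggest possible off-targets."""
          scores = {}
          for target in np.unique(y_train):
              members = X_train[y_train == target]
              d2 = ((members - x) ** 2).sum(axis=1)
              scores[target] = float(np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2))))
          return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

      rng = np.random.default_rng(1)
      # Synthetic stand-ins: three protein targets, 30 ligands each, 8-D descriptors.
      X_train = np.vstack([rng.normal(loc=c, size=(30, 8)) for c in (0.0, 2.0, 4.0)])
      y_train = np.repeat(["target_A", "target_B", "target_C"], 30)

      print(parzen_scores(rng.normal(loc=2.1, size=8), X_train, y_train))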

  12. Comparing "pick and place" task in spatial Augmented Reality versus non-immersive Virtual Reality for rehabilitation setting.

    PubMed

    Khademi, Maryam; Hondori, Hossein Mousavi; Dodakian, Lucy; Cramer, Steve; Lopes, Cristina V

    2013-01-01

    Introducing computer games to the rehabilitation market led to the development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADLs), is one of the major affected functions that stroke patients expect to recover. We developed an exercise consisting of moving an object between various points, following a flashing light that indicates the next target. The results show superior performance of subjects in the spatial AR versus the non-immersive VR setting. This could be due to the extraneous hand-eye coordination which exists in VR whereas it is eliminated in spatial AR. PMID:24110762

  13. Comparing "pick and place" task in spatial Augmented Reality versus non-immersive Virtual Reality for rehabilitation setting.

    PubMed

    Khademi, Maryam; Hondori, Hossein Mousavi; Dodakian, Lucy; Cramer, Steve; Lopes, Cristina V

    2013-01-01

    Introducing computer games to the rehabilitation market led to the development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADLs), is one of the major affected functions that stroke patients expect to recover. We developed an exercise consisting of moving an object between various points, following a flashing light that indicates the next target. The results show superior performance of subjects in the spatial AR versus the non-immersive VR setting. This could be due to the extraneous hand-eye coordination which exists in VR whereas it is eliminated in spatial AR.

  14. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning.

  15. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. PMID:23212750

  16. Urban Archaeology: how to Communicate a Story of a Site, 3d Virtual Reconstruction but not Only

    NASA Astrophysics Data System (ADS)

    Capone, M.

    2011-09-01

    Over the past few years, experimental systems have been developed to introduce new ways of enjoying cultural heritage using digital media. Technology has had a leading role in this testing ground, increasing the need to develop new ways of communication in line with contemporary iconographic culture. Most applications are aimed at creating online databases that allow free access to information, which helps to spread culture and simplify the study of cultural heritage. To this type of application are added others aimed at defining new and different ways of enjoying cultural heritage. Very interesting applications are those regarding the reconstruction of archaeological landscapes. The target of these applications is to develop a new level of knowledge that increases the value of the archaeological find and the level of understanding. In fact, digital media can bridge the communication gap associated with archaeological finds: virtual simulation offers the possibility to put the find in context and defines a new way to enjoy cultural heritage. In most of these cases the spectacular and recreational factor generally prevails. We believe that experimentation is needed in this area, particularly for the development of Urban Archaeology. In this case, another obstacle to enjoyment is added to the lack of communication typical of archaeological finds, because the find is "hidden" in an irreversible way: it is under water or under the city. So, our research is mainly oriented to defining a methodological path for elaborating a communication strategy to increase interest in Urban Archaeology.

  17. Treatment of acrophobia in virtual reality: the role of immersion and presence.

    PubMed

    Krijn, Merel; Emmelkamp, Paul M G; Biemond, Roeline; de Wilde de Ligny, Claudius; Schuemie, Martijn J; van der Mast, Charles A P G

    2004-02-01

    In this study the effects of virtual reality exposure therapy (VRET) were investigated in patients with acrophobia. Feelings of presence in VRET were systematically varied by using either a head-mounted display (HMD) (low presence) or a computer automatic virtual environment (CAVE) (high presence). VRET in general was found to be more effective than no treatment. No differences were found in effectiveness between VRET using an HMD or CAVE. Results were maintained at 6 months follow-up. Results of VRET were comparable with those of exposure in vivo (Cyberpsychology and Behavior 4 (2001) 335). In treatment completers no relationship was found between presence and anxiety. Early drop-outs experienced less acrophobic complaints and psychopathology in general at pre-test. They also experienced less presence and anxiety in the virtual environment used in session one as compared to patients that completed VRET.

  18. New Desktop Virtual Reality Technology in Technical Education

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.

    2008-01-01

    Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…

  19. Game engines and immersive displays

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.

  20. Real-time recording and classification of eye movements in an immersive virtual environment

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087
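
    Two of the quantities described here - the angular distance between the gaze and a virtual object, and the fixation/saccade split - can be sketched as follows; the velocity-threshold (I-VT) rule shown is a generic scheme, not necessarily the algorithm distributed at the linked repository.

      import numpy as np

      def angular_distance_deg(gaze_dir, eye_pos, object_pos):
          """Angle (degrees) between the gaze direction and the eye-to-object direction."""
          to_obj = np.asarray(object_pos, float) - np.asarray(eye_pos, float)
          g = np.asarray(gaze_dir, float)
          c = np.dot(g, to_obj) / (np.linalg.norm(g) * np.linalg.norm(to_obj))
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      def classify_ivt(gaze_dirs, timestamps, velocity_threshold=100.0):
          """Label each inter-sample interval as 'fixation' or 'saccade' by comparing
          angular gaze velocity (deg/s) against a fixed threshold."""
          g = np.asarray(gaze_dirs, float)
          g = g / np.linalg.norm(g, axis=1, keepdims=True)
          t = np.asarray(timestamps, float)
          c = np.clip((g[:-1] * g[1:]).sum(axis=1), -1.0, 1.0)
          ang_vel = np.degrees(np.arccos(c)) / np.diff(t)
          return np.where(ang_vel > velocity_threshold, "saccade", "fixation")

      # Fabricated 60 Hz gaze trace drifting slowly to the right.
      t = np.arange(0, 0.1, 1 / 60)
      dirs = [[np.sin(a), 0.0, np.cos(a)] for a in np.linspace(0, 0.02, len(t))]
      print(classify_ivt(dirs, t))
      print(angular_distance_deg([0, 0, 1], [0, 1.6, 0], [0.5, 1.5, 3.0]))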

  1. The Development and Evaluation of a Virtual Radiotherapy Treatment Machine Using an Immersive Visualisation Environment

    ERIC Educational Resources Information Center

    Bridge, P.; Appleyard, R. M.; Ward, J. W.; Philips, R.; Beavis, A. W.

    2007-01-01

    Due to the lengthy learning process associated with complicated clinical techniques, undergraduate radiotherapy students can struggle to access sufficient time or patients to gain the level of expertise they require. By developing a hybrid virtual environment with real controls, it was hoped that group learning of these techniques could take place…

  2. The importance of postural cues for determining eye height in immersive virtual reality.

    PubMed

    Leyrer, Markus; Linkenauger, Sally A; Bülthoff, Heinrich H; Mohler, Betty J

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274

  3. Immersive Virtual Reality in the Psychology Classroom: What Purpose Could it Serve?

    ERIC Educational Resources Information Center

    Coxon, Matthew

    2013-01-01

    Virtual reality is by no means a new technology, yet it is increasingly being used, to different degrees, in education, training, rehabilitation, therapy, and home entertainment. Although the exact reasons for this shift are not the subject of this short opinion piece, it is possible to speculate that decreased costs, and increased performance, of…

  4. On the Potential for Using Immersive Virtual Environments to Support Laboratory Experiment Contextualisation

    ERIC Educational Resources Information Center

    Machet, Tania; Lowe, David; Gutl, Christian

    2012-01-01

    This paper explores the hypothesis that embedding a laboratory activity into a virtual environment can provide a richer experimental context and hence improve the understanding of the relationship between a theoretical model and the real world, particularly in terms of the model's strengths and weaknesses. While an identified learning objective of…

  5. The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality

    PubMed Central

    Leyrer, Markus; Linkenauger, Sally A.; Bülthoff, Heinrich H.; Mohler, Betty J.

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274

  6. Visual Perspectives within Educational Computer Games: Effects on Presence and Flow within Virtual Immersive Learning Environments

    ERIC Educational Resources Information Center

    Scoresby, Jon; Shelton, Brett E.

    2011-01-01

    The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…

  7. A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience

    ERIC Educational Resources Information Center

    Shih, Ya-Chun

    2015-01-01

    Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…

  8. 3D-QSAR studies and shape based virtual screening for identification of novel hits to inhibit MbtA in Mycobacterium tuberculosis.

    PubMed

    Maganti, Lakshmi; Ghoshal, Nanda

    2015-01-01

    Mycobacterium tuberculosis, the pathogen responsible for tuberculosis, uses various strategies to survive in a variety of host lesions. The re-emergence of multi-drug-resistant strains of M. tuberculosis underlines the necessity to discover new molecules. Inhibitors of the aryl acid adenylating enzyme MbtA, involved in siderophore biosynthesis in M. tuberculosis, are being explored as potential antitubercular agents. In this study, we have used 3D-QSAR models and shape-based virtual screening to identify novel MbtA inhibitors. 3D-QSAR studies were carried out on nucleoside bisubstrate derivatives. Both Comparative Molecular Field Analysis (r² = 0.944 and r²pred = 0.938) and Comparative Molecular Similarity Indices Analysis (r² = 0.892 and r²pred = 0.842) models, developed using Gasteiger charges with all fields, predicted efficiently. A total of 13 hits were identified as novel prospective inhibitors for MbtA by utilizing an in silico workflow. Out of 13 hits, the five top-ranked hits were used for further molecular dynamics studies to gain more insights about the stability of the complexes. PMID:24417439

  9. Modeling and Accuracy Assessment for 3D-VIRTUAL Reconstruction in Cultural Heritage Using Low-Cost Photogrammetry: Surveying of the "santa MARÍA Azogue" Church's Front

    NASA Astrophysics Data System (ADS)

    Robleda Prieto, G.; Pérez Ramos, A.

    2015-02-01

    Sometimes it can be difficult to represent an architectural idea, a solution, a detail or a newly created element "on paper", depending on the complexity of what is to be conveyed through its graphical representation, but it may be even harder to represent the existing reality (a building, a detail, ...), at least with an acceptable degree of definition and accuracy. As a solution to this hypothetical problem, this paper presents a methodology to collect measurement data by combining different methods or techniques, in order to obtain the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, as well as to assess the accuracy of the results obtained, at a sufficient level of accuracy and at modest cost. In addition, a 3D reconstruction model can be obtained that provides strong support for generating orthoimages, beyond point clouds obtained through more expensive methods such as laser scanning. This methodology was applied in the case study of the 3D virtual reconstruction of a medieval church's main façade, chosen because of the geometrical complexity of many of its elements, such as the main doorway with its archivolts and rich detail, as well as the rose window located above it, which is inaccessible due to its height.

  10. Integrated computational tools for identification of CCR5 antagonists as potential HIV-1 entry inhibitors: homology modeling, virtual screening, molecular dynamics simulations and 3D QSAR analysis.

    PubMed

    Moonsamy, Suri; Dash, Radha Charan; Soliman, Mahmoud E S

    2014-04-23

    Using integrated in-silico computational techniques, including homology modeling, structure-based and pharmacophore-based virtual screening, molecular dynamics simulations, per-residue energy decomposition analysis and atom-based 3D-QSAR analysis, we proposed ten novel compounds as potential CCR5-dependent HIV-1 entry inhibitors. Via validated docking calculations, binding free energies revealed that the novel leads demonstrated better binding affinities with CCR5 compared to maraviroc, an FDA-approved HIV-1 entry inhibitor in clinical use. Per-residue interaction energy decomposition analysis on the averaged MD structure showed that the hydrophobic active residues Trp86, Tyr89 and Tyr108 contributed the most to inhibitor binding. The validated 3D-QSAR model showed a high cross-validated r²cv value of 0.84 using three principal components and a non-cross-validated r² value of 0.941. It was also revealed that almost all compounds in the test set and training set yielded a good predicted value. Information gained from this study could shed light on the activity of a new series of lead compounds as potential HIV entry inhibitors and serve as a powerful tool in the drug design and development machinery.

  11. A novel orthogonal transmission-virtual grating method and its applications in measuring micro 3-D shape of deformed liquid surface

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwei; Huang, Xianfu; Xie, Huimin

    2013-02-01

    A deformed liquid surface directly involves surface tension, which can be used to account for the kinematics of aquatic insects at the gas-liquid interface and for light metals floating on the water surface. In this paper a novel method based on a deformed transmission-virtual grating is proposed for the determination of a deformed liquid surface. By placing an orthogonal grating (1-5 lines/mm) under the transparent water groove and then capturing images of the deformed water surface from above, a full-field displacement vector that is directly associated with the 3-D deformed liquid surface can be evaluated by processing the recorded deformed fringe pattern in the two directions (x and y). Theories and equations for the method are thoroughly presented. A validation test measuring the deformed water surface caused by a Chinese 1-cent coin has been conducted to demonstrate the ability of the developed method. The results show that the method is robust in determining the micro 3-D surface of a deformed liquid, with submicron-scale resolution and a wide application scope.

  12. Weapon identification using antemortem computed tomography with virtual 3D and rapid prototype modeling--a report in a case of blunt force head injury.

    PubMed

    Woźniak, Krzysztof; Rzepecka-Woźniak, Ewa; Moskała, Artur; Pohl, Jerzy; Latacz, Katarzyna; Dybała, Bogdan

    2012-10-10

    A frequent request of a prosecutor referring to forensic autopsy is to determine the mechanism of an injury and to identify the weapons used to cause those injuries. This task could be problematic in many ways, including changes in the primary injury caused by medical intervention and the process of healing. To accomplish this task, the forensic pathologist has to gather all possible information during the post-mortem examination. The more data is collected, the easier it is to obtain an accurate answer to the prosecutor's question. The authors present a case of head injuries that the victim sustained under unknown circumstances. The patient underwent neurosurgical treatment which resulted in alteration of the bone fracture pattern. The only way to evaluate this injury was to analyze antemortem clinical data, especially CT scans, with virtual 3D reconstruction of the fractured skull. A physical model of a part of the broken skull was created with the use of 3D printing. These advanced techniques, applied for the first time in Poland for forensic purposes, allowed investigators to extract enough data to develop a hypothesis about the mechanism of injury and the weapon most likely used.

  13. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive, stereoscopic 3D viewer of weather information from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located together with the corresponding reconstructed 3D volumetric weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer using a polarized-glasses-based system. The user can interact with the 3D virtual world using a Nintendo Wiimote to navigate through it and a Nintendo Wii Nunchuk to give commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show that dynamic gestures are effectively recognized, enabling more natural interaction and immersive navigation in the virtual world.
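
    The abstract does not spell out the posture-advance measure itself; as a hedged baseline for the same task (matching an observed posture sequence against stored gesture templates), the sketch below uses dynamic time warping over posture feature vectors rather than the authors' procedure.

```python
# Illustrative baseline only (not the authors' recognizer): classify a dynamic
# gesture by the dynamic-time-warping distance between the observed sequence of
# posture feature vectors and each stored template.
import numpy as np

def dtw_distance(seq_a, seq_b):
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])      # local posture distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(observed, templates):
    """templates: dict mapping gesture name -> template posture sequence."""
    return min(templates, key=lambda name: dtw_distance(observed, templates[name]))
```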

  14. The discovery of a novel and selective inhibitor of PTP1B over TCPTP: 3D QSAR pharmacophore modeling, virtual screening, synthesis, and biological evaluation.

    PubMed

    Ma, Ying; Jin, Yuan-Yuan; Wang, Ye-Liu; Wang, Run-Ling; Lu, Xin-Hua; Kong, De-Xin; Xu, Wei-Ren

    2014-06-01

    Given the special role of insulin and leptin signaling in various biological responses, protein-tyrosine phosphatase-1B (PTP1B) has been regarded as a novel therapeutic target for treating type 2 diabetes and obesity. However, because the active pocket is highly conserved (sequence identity of about 74% with TCPTP), targeting PTP1B for drug discovery is a great challenge. In this study, we employed the software package Discovery Studio to develop 3D QSAR pharmacophore models for PTP1B and TCPTP inhibitors. The models were further validated by three methods (cost analysis, test set prediction, and Fisher's test), showing that they can be used to predict the biological activities of compounds without costly and time-consuming synthesis. The criteria for virtual screening were also validated by testing known selective PTP1B inhibitors. Virtual screening experiments and subsequent in vitro evaluation of promising hits revealed a novel and selective inhibitor of PTP1B over TCPTP, and the most likely binding mode was proposed. Thus, the findings reported here may provide a new strategy for discovering selective PTP1B inhibitors.

  15. A semi-immersive virtual reality incremental swing balance task activates prefrontal cortex: a functional near-infrared spectroscopy study.

    PubMed

    Basso Moro, Sara; Bisconti, Silvia; Muthalib, Makii; Spezialetti, Matteo; Cutini, Simone; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2014-01-15

    Previous functional near-infrared spectroscopy (fNIRS) studies have indicated that the prefrontal cortex (PFC) is involved in maintaining postural balance after external perturbations. So far, no studies have investigated the PFC hemodynamic response to virtual reality (VR) tasks that could be adopted in the field of functional neurorehabilitation. The aim of this fNIRS study was to assess the PFC oxygenation response during an incremental and a control swing balance task (ISBT and CSBT, respectively) in a semi-immersive VR environment driven by a depth-sensing camera. It was hypothesized that: i) the PFC would be bilaterally activated in response to the increase of the ISBT difficulty, as this cortical region is involved in the allocation of attentional resources to maintain postural control; and ii) the PFC activation would be greater in the right than in the left hemisphere, considering its dominance for visual control of body balance. To verify these hypotheses, 16 healthy male subjects were asked to stand barefoot while watching a three-dimensional virtual representation of themselves projected onto a screen. They had to maintain their equilibrium on a virtual blue swing board subjected to external destabilizing perturbations (i.e., with the forward-backward direction of the applied pulse force randomized) during a 3-min ISBT (performed at four levels of difficulty) or a 3-min CSBT (performed constantly at the lowest level of difficulty of the ISBT). The center of mass (COM) was calculated at each frame and projected onto the floor. When subjects were unable to keep the COM over the board, the board turned red (error). After each error, the time required to bring the COM back onto the board was calculated (returning time). An eight-channel continuous wave fNIRS system was employed for measuring oxygenation changes (oxygenated hemoglobin, O2Hb; deoxygenated hemoglobin, HHb) related to PFC activation (Brodmann Areas 10, 11
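
    The error and returning-time definitions above lend themselves to a simple per-frame check; the sketch below assumes an axis-aligned rectangular board footprint and a fixed frame interval, both hypothetical simplifications of the actual system.

```python
# Minimal sketch of the error / returning-time logic described above: flag frames
# where the projected COM leaves the (assumed rectangular) board footprint and
# measure how long it takes to bring it back. Board geometry and dt are placeholders.
import numpy as np

def returning_times(com_xy, board_min, board_max, dt):
    """com_xy: (N, 2) floor projection of the COM; returns one duration per error episode."""
    on_board = np.all((com_xy >= board_min) & (com_xy <= board_max), axis=1)
    durations, off_frames = [], 0
    for ok in on_board:
        if not ok:
            off_frames += 1                       # error frame: board shown in red
        elif off_frames:
            durations.append(off_frames * dt)     # COM is back on the board
            off_frames = 0
    return durations
```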

  16. A semi-immersive virtual reality incremental swing balance task activates prefrontal cortex: a functional near-infrared spectroscopy study.

    PubMed

    Basso Moro, Sara; Bisconti, Silvia; Muthalib, Makii; Spezialetti, Matteo; Cutini, Simone; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2014-01-15

    Previous functional near-infrared spectroscopy (fNIRS) studies have indicated that the prefrontal cortex (PFC) is involved in maintaining postural balance after external perturbations. So far, no studies have investigated the PFC hemodynamic response to virtual reality (VR) tasks that could be adopted in the field of functional neurorehabilitation. The aim of this fNIRS study was to assess the PFC oxygenation response during an incremental and a control swing balance task (ISBT and CSBT, respectively) in a semi-immersive VR environment driven by a depth-sensing camera. It was hypothesized that: i) the PFC would be bilaterally activated in response to the increase of the ISBT difficulty, as this cortical region is involved in the allocation of attentional resources to maintain postural control; and ii) the PFC activation would be greater in the right than in the left hemisphere, considering its dominance for visual control of body balance. To verify these hypotheses, 16 healthy male subjects were asked to stand barefoot while watching a three-dimensional virtual representation of themselves projected onto a screen. They had to maintain their equilibrium on a virtual blue swing board subjected to external destabilizing perturbations (i.e., with the forward-backward direction of the applied pulse force randomized) during a 3-min ISBT (performed at four levels of difficulty) or a 3-min CSBT (performed constantly at the lowest level of difficulty of the ISBT). The center of mass (COM) was calculated at each frame and projected onto the floor. When subjects were unable to keep the COM over the board, the board turned red (error). After each error, the time required to bring the COM back onto the board was calculated (returning time). An eight-channel continuous wave fNIRS system was employed for measuring oxygenation changes (oxygenated hemoglobin, O2Hb; deoxygenated hemoglobin, HHb) related to PFC activation (Brodmann Areas 10, 11

  17. Affordable virtual environments: building a virtual beach for clinical use.

    PubMed

    Sherstyuk, Andrei; Aschwanden, Christoph; Saiki, Stanley

    2005-01-01

    Virtual Reality has been used for clinical application for about 10 years and has proved to be an effective tool for treating various disorders. In this paper, we want to share our experience in building a 3D, motion tracked, immersive VR system for pain treatment and biofeedback research. PMID:15718779

  18. Hypnosis delivered through immersive virtual reality for burn pain: A clinical case series.

    PubMed

    Patterson, David R; Wiechman, Shelley A; Jensen, Mark; Sharar, Sam R

    2006-04-01

    This study is the first to use virtual-reality technology on a series of clinical patients to make hypnotic analgesia less effortful for patients and to increase the efficiency of hypnosis by eliminating the need for the presence of a trained clinician. This technologically based hypnotic induction was used to deliver hypnotic analgesia to burn-injury patients undergoing painful wound-care procedures. Pre- and postprocedure measures were collected on 13 patients with burn injuries across 3 days. In an uncontrolled series of cases, there was a decrease in reported pain and anxiety, and the need for opioid medication was cut in half. The results support additional research on the utility and efficacy of hypnotic analgesia provided by virtual reality hypnosis. PMID:16581687

  19. A method for generating an illusion of backwards time travel using immersive virtual reality-an exploratory study.

    PubMed

    Friedman, Doron; Pizarro, Rodrigo; Or-Berkers, Keren; Neyret, Solène; Pan, Xueni; Slater, Mel

    2014-01-01

    We introduce a new method, based on immersive virtual reality (IVR), to give people the illusion of having traveled backwards through time to relive a sequence of events in which they can intervene and change history. The participant had played an important part in events with a tragic outcome-deaths of strangers-by having to choose between saving 5 people or 1. We consider whether the ability to go back through time, and intervene, to possibly avoid all deaths, has an impact on how the participant views such moral dilemmas, and also whether this experience leads to a re-evaluation of past unfortunate events in their own lives. We carried out an exploratory study where in the "Time Travel" condition 16 participants relived these events three times, seeing incarnations of their past selves carrying out the actions that they had previously carried out. In a "Repetition" condition another 16 participants replayed the same situation three times, without any notion of time travel. Our results suggest that those in the Time Travel condition did achieve an illusion of "time travel" provided that they also experienced an illusion of presence in the virtual environment, body ownership, and agency over the virtual body that substituted their own. Time travel produced an increase in guilt feelings about the events that had occurred, and an increase in support of utilitarian behavior as the solution to the moral dilemma. Time travel also produced an increase in implicit morality as judged by an implicit association test. The time travel illusion was associated with a reduction of regret associated with bad decisions in their own lives. The results show that when participants have a third action that they can take to solve the moral dilemma (that does not immediately involve choosing between the 1 and the 5) then they tend to take this option, even though it is useless in solving the dilemma, and actually results in the deaths of a greater number.

  20. Conformal Visualization for Partially-Immersive Platforms

    PubMed Central

    Petkov, Kaloian; Papadopoulos, Charilaos; Zhang, Min; Kaufman, Arie E.; Gu, Xianfeng

    2010-01-01

    Current immersive VR systems such as the CAVE provide an effective platform for the immersive exploration of large 3D data. A major limitation is that in most cases at least one display surface is missing due to space, access or cost constraints. This partially-immersive visualization results in a substantial loss of visual information that may be acceptable for some applications; however, it becomes a major obstacle for critical tasks, such as the analysis of medical data. We propose a conformal deformation rendering pipeline for the visualization of datasets on partially-immersive platforms. The angle-preserving conformal mapping approach is used to map the 360° 3D view volume to arbitrary display configurations. It has the desirable property of preserving shapes under distortion, which is important for identifying features, especially in medical data. The conformal mapping is used for rasterization, real-time ray tracing and volume rendering of the datasets. Since the technique is applied during rendering, we can construct stereoscopic images from the data, which is usually not true for image-based distortion approaches. We demonstrate the stereo conformal mapping rendering pipeline in the partially-immersive 5-wall Immersive Cabin (IC) for virtual colonoscopy and architectural review. PMID:26279083
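
    The specific conformal parameterization used for the Immersive Cabin is not reproduced here; as a simple, self-contained example of an angle-preserving map of view directions, the sketch below applies stereographic projection, which sends unit directions on the view sphere to a plane while preserving local angles.

```python
# Illustration of a conformal (angle-preserving) mapping, not the paper's pipeline:
# stereographic projection of unit view directions from the pole (0, 0, -1) onto
# the plane z = 0. Local shapes are preserved, which is the property that the
# conformal visualization approach relies on.
import numpy as np

def stereographic(directions):
    """Map unit 3D view directions to 2D plane coordinates (conformal)."""
    d = np.asarray(directions, float)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    s = 1.0 / (1.0 + z)                    # singular only at the projection pole
    return np.stack([x * s, y * s], axis=-1)
```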

  1. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.
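
    The linear relation between scanning time and scanned distance mentioned at the start of the abstract is usually quantified by regressing response time on distance; the sketch below shows that analysis with hypothetical values (the distances and times are illustrative, not data from the study).

```python
# Standard mental-scanning analysis (illustrative data only): the slope of the
# RT-versus-distance regression estimates the scanning rate, and a positive,
# significant slope is the signature of a metric spatial representation.
import numpy as np
from scipy.stats import linregress

distances = np.array([2.0, 4.0, 6.0, 8.0, 10.0])    # hypothetical scanned distances (cm)
rts = np.array([1.10, 1.26, 1.39, 1.57, 1.71])       # hypothetical mean scanning times (s)

fit = linregress(distances, rts)
print(f"slope = {fit.slope:.3f} s/cm, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```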

  2. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  3. Manifold compositions, music visualization, and scientific sonification in an immersive virtual-reality environment.

    SciTech Connect

    Kaper, H. G.

    1998-01-05

    An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.

  4. On the potential for using immersive virtual environments to support laboratory experiment contextualisation

    NASA Astrophysics Data System (ADS)

    Machet, Tania; Lowe, David; Gütl, Christian

    2012-12-01

    This paper explores the hypothesis that embedding a laboratory activity into a virtual environment can provide a richer experimental context and hence improve the understanding of the relationship between a theoretical model and the real world, particularly in terms of the model's strengths and weaknesses. While an identified learning objective of laboratories is to support the understanding of the relationship between models and reality, the paper illustrates that this understanding is hindered by inherently limited experiments and that there is scope for improvement. Despite the contextualisation of learning activities having been shown to support learning objectives in many fields, there is traditionally little contextual information presented during laboratory experimentation. The paper argues that enhancing the laboratory activity with contextual information affords an opportunity to improve students' understanding of the relationship between the theoretical model and the experiment (which is effectively a proxy for the complex real world), thereby improving their understanding of the relationship between the model and reality. The authors propose that these improvements can be achieved by setting remote laboratories within context-rich virtual worlds.

  5. Body Space in Social Interactions: A Comparison of Reaching and Comfort Distance in Immersive Virtual Reality

    PubMed Central

    Iachini, Tina; Coello, Yann; Frassinetti, Francesca; Ruggiero, Gennaro

    2014-01-01

    Background Do peripersonal space for acting on objects and interpersonal space for interacting with conspecifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance. Methodology Participants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active). Principal Findings Comfort-distance was larger than in the other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants. Conclusions These findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, to different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space. PMID:25405344

  6. ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

    PubMed Central

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
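
    The feedback signal described (relative spectral power in the alpha and beta bands) can be computed per epoch from the EEG power spectral density; the sketch below is a minimal version using Welch's method, with band edges and the normalization range chosen as typical assumptions rather than the study's exact settings.

```python
# Minimal sketch of a relative band-power neurofeedback feature: the fraction of
# broadband (1-40 Hz) power falling in a target band, from Welch's PSD.
# Band edges and the 1-40 Hz normalization range are assumptions.
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, band, total=(1.0, 40.0)):
    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_band = (f >= band[0]) & (f < band[1])
    in_total = (f >= total[0]) & (f < total[1])
    return psd[in_band].sum() / psd[in_total].sum()

# e.g. relative_band_power(epoch, fs=256, band=(8, 12))    # relaxation / alpha
#      relative_band_power(epoch, fs=256, band=(13, 30))   # concentration / beta
```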

  7. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
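
    The underlying idea, independent of the Convolvotron hardware, is binaural synthesis: convolving a mono source with a left/right pair of head-related impulse responses (HRIRs) measured for the desired direction. The sketch below shows the static case; the HRIR arrays are assumed inputs, and the real system additionally updates the filters over time as the listener or source moves.

```python
# Static binaural-synthesis sketch (the idea behind headphone 3D audio, not the
# Convolvotron's actual DSP): filter a mono signal with a direction-specific
# left/right HRIR pair. hrir_l and hrir_r are assumed, pre-measured responses.
import numpy as np

def binauralize(mono, hrir_l, hrir_r):
    """Return an (N, 2) array: left/right headphone signals."""
    left = np.convolve(mono, hrir_l)
    right = np.convolve(mono, hrir_r)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```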

  8. Comparative usability studies of full vs. partial immersive virtual reality simulation for medical education and training.

    PubMed

    Pierce, Jennifer; Gutiérrez, Fátima; Vergara, Víctor M; Alverson, Dale C; Qualls, Clifford; Saland, Linda; Goldsmith, Timothy; Caudell, Thomas Preston

    2008-01-01

    Virtual reality (VR) simulation provides a means of making experiential learning reproducible and reusable. This study was designed to determine the efficiency and satisfaction components of usability. Previously, it was found that first-year medical students using a VR simulation for medical education demonstrated effective learning, as measured by knowledge structure improvements, both with and without a head-mounted display (HMD), and that students using an HMD showed statistically greater improvement in knowledge structures than those not using an HMD. However, in the current analysis of the other components of usability, there were no overall significant differences in efficiency (ease of use) or satisfaction within this same group of randomized subjects when comparing students using an HMD to those not using one. These types of studies may be important in determining the most appropriate, cost-effective VR simulation technology needed to achieve specific learning goals and objectives. PMID:18391324

  9. What to expect from immersive virtual environment exposure: influences of gender, body mass index, and past experience.

    PubMed

    Stanney, Kay M; Hale, Kelly S; Nahmens, Isabelina; Kennedy, Robert S

    2003-01-01

    For those interested in using head-coupled PC-based immersive virtual environment (VE) technology to train, entertain, or inform, it is essential to understand the effects this technology has on its users. This study investigated potential adverse effects, including the sickness associated with exposure and extreme responses (emesis, flashbacks). Participants were exposed to a VE for 15 to 60 min, with either complete or streamlined navigational control and simple or complex scenes, after which time measures of sickness were obtained. More than 80% of participants experienced nausea, oculomotor disturbances, and/or disorientation, with disorientation potentially lasting > 24 hr. Of the participants, 12.9% prematurely ended their exposure because of adverse effects; of these, 9.2% experienced an emetic response, whereas only 1.2% of all participants experienced emesis. The results indicate that designers may be able to reduce these rates by limiting exposure duration and reducing the degrees of freedom of the user's navigational control. Results from gender, body mass, and past experience comparisons indicated it may be possible to identify those who will experience adverse effects attributable to exposure and warn such individuals. Applications for this research include military, entertainment, and any other interactive systems for which designers seek to avoid adverse effects associated with exposure.

  10. Resting-state fMRI activity predicts unsupervised learning and memory in an immersive virtual reality environment.

    PubMed

    Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T

    2014-01-01

    In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145
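
    The across-subject analysis described here reduces, in its simplest form, to correlating a per-subject summary of resting-state signal variability with the behavioral score; the sketch below shows that step with hypothetical ROI time series, leaving out the preprocessing and multiple-comparison handling the study would require.

```python
# Hedged sketch of the core across-subject test: correlate resting-state BOLD
# variability (std of an ROI time series, one per subject) with object-location
# recall performance. Inputs are hypothetical; preprocessing is omitted.
import numpy as np
from scipy.stats import pearsonr

def variability_vs_performance(roi_timeseries, performance):
    """roi_timeseries: list of 1D arrays (one resting-state ROI signal per subject)."""
    variability = np.array([np.std(ts) for ts in roi_timeseries])
    r, p = pearsonr(variability, np.asarray(performance, float))
    return r, p
```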

  11. Resting-State fMRI Activity Predicts Unsupervised Learning and Memory in an Immersive Virtual Reality Environment

    PubMed Central

    Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145

  12. Eye-tracking and EMG supported 3D Virtual Reality - an integrated tool for perceptual and motor development of children with severe physical disabilities: a research concept.

    PubMed

    Pulay, Márk Ágoston

    2015-01-01

    Enabling children with severe physical disabilities (such as tetraparesis spastica) to gain movement experiences of appropriate quality and quantity is currently the greatest challenge in the field of neurorehabilitation. These movement experiences underpin many cognitive processes, and their absence may cause additional secondary cognitive dysfunctions such as disorders of body image, figure invariance, visual perception, auditory differentiation, concentration, analytic and synthetic thinking, and visual memory. Virtual Reality is a technology that provides a sense of presence in a seemingly real environment by means of 3D images and animations generated in a computer environment, and enables the person to interact with the objects in that environment. One of our biggest challenges is to find a well-suited input device (hardware) that lets children with severe physical disabilities interact with the computer. Based on our own experiences and a thorough literature review, we have come to the conclusion that an effective combination of eye-tracking and EMG devices should work well.

  13. 2.5D/3D Models for the enhancement of architectural-urban heritage. A Virtual Tour of the design of the Fascist headquarters in Littoria

    NASA Astrophysics Data System (ADS)

    Ippoliti, E.; Calvano, M.; Mores, L.

    2014-05-01

    Enhancement of cultural heritage is not simply a matter of preserving material objects; it comes full circle only when the heritage can be enjoyed and used by the community. This is the rationale behind this presentation: an urban Virtual Tour exploring the 1937 design of the Fascist Headquarters in Littoria, now part of Latina, by the architect Oriolo Frezzotti. Although the application is deliberately "simple", it was part of a much broader framework of goals. One such goal was to create "friendly and perceptively meaningful" interfaces by integrating different "3D models" and so enriching the user experience. In fact, by exploiting natural mechanisms of visual perception and the emotional emphasis associated with vision, the illusionistic simulation of the scene facilitates access to the data even for "amateur" users. A second goal was to "contextualise the information" on which the concept of cultural heritage is based. In the application, communication of the heritage is linked to its physical and linguistic context; the latter is then used as a basis from which to set out to explore and understand the historical evidence. A third goal was to foster the widespread dissemination and sharing of this heritage of knowledge. On the one hand we worked to make the application usable from the Web; on the other, we established a reliable, rapid operational procedure yielding high-quality processed data and content. The procedure was also repeatable on a large scale.

  14. A common feature-based 3D-pharmacophore model generation and virtual screening: identification of potential PfDHFR inhibitors.

    PubMed

    Adane, Legesse; Bharatam, Prasad V; Sharma, Vikas

    2010-10-01

    A four-feature 3D-pharmacophore model was built from a set of 24 compounds whose activities were reported against the V1/S strain of the Plasmodium falciparum dihydrofolate reductase (PfDHFR) enzyme. This is an enzyme harboring Asn51Ile + Cys59Arg + Ser108Asn + Ile164Leu mutations. The HipHop module of the Catalyst program was used to generate the model. Selection of the best model among the 10 hypotheses generated by HipHop was carried out based on rank and best-fit values or alignments of the training set compounds onto a particular hypothesis. The best model (hypo1) consisted of two H-bond donors, one hydrophobic aromatic, and one hydrophobic aliphatic features. Hypo1 was used as a query to virtually screen Maybridge2004 and NCI2000 databases. The hits obtained from the search were subsequently subjected to FlexX and Glide docking studies. Based on the binding scores and interactions in the active site of quadruple-mutant PfDHFR, a set of nine hits were identified as potential inhibitors. PMID:19995305

  15. A common feature-based 3D-pharmacophore model generation and virtual screening: identification of potential PfDHFR inhibitors.

    PubMed

    Adane, Legesse; Bharatam, Prasad V; Sharma, Vikas

    2010-10-01

    A four-feature 3D-pharmacophore model was built from a set of 24 compounds whose activities were reported against the V1/S strain of the Plasmodium falciparum dihydrofolate reductase (PfDHFR) enzyme. This is an enzyme harboring Asn51Ile + Cys59Arg + Ser108Asn + Ile164Leu mutations. The HipHop module of the Catalyst program was used to generate the model. Selection of the best model among the 10 hypotheses generated by HipHop was carried out based on rank and best-fit values or alignments of the training set compounds onto a particular hypothesis. The best model (hypo1) consisted of two H-bond donors, one hydrophobic aromatic, and one hydrophobic aliphatic features. Hypo1 was used as a query to virtually screen Maybridge2004 and NCI2000 databases. The hits obtained from the search were subsequently subjected to FlexX and Glide docking studies. Based on the binding scores and interactions in the active site of quadruple-mutant PfDHFR, a set of nine hits were identified as potential inhibitors.

  16. Eliciting Affect via Immersive Virtual Reality: A Tool for Adolescent Risk Reduction

    PubMed Central

    Houck, Christopher D.; Barker, David H.; Garcia, Abbe Marrs; Spitalnick, Josh S.; Curtis, Virginia; Roye, Scott; Brown, Larry K.

    2014-01-01

    Objective A virtual reality environment (VRE) was designed to expose participants to substance use and sexual risk-taking cues to examine the utility of VR in eliciting adolescent physiological arousal. Methods 42 adolescents (55% male) with a mean age of 14.54 years (SD = 1.13) participated. Physiological arousal was examined through heart rate (HR), respiratory sinus arrhythmia (RSA), and self-reported somatic arousal. A within-subject design (neutral VRE, VR party, and neutral VRE) was utilized to examine changes in arousal. Results The VR party demonstrated an increase in physiological arousal relative to a neutral VRE. Examination of individual segments of the party (e.g., orientation, substance use, and sexual risk) demonstrated that HR was significantly elevated across all segments, whereas only the orientation and sexual risk segments demonstrated significant impact on RSA. Conclusions This study provides preliminary evidence that VREs can be used to generate physiological arousal in response to substance use and sexual risk cues. PMID:24365699

  17. The Effects of Actual Human Size Display and Stereoscopic Presentation on Users' Sense of Being Together with and of Psychological Immersion in a Virtual Character

    PubMed Central

    Ahn, Dohyun; Seo, Youngnam; Kim, Minkyung; Kwon, Joung Huem; Jung, Younbo; Ahn, Jungsun

    2014-01-01

    This study examined the role of display size and mode in increasing users' sense of being together with and of their psychological immersion in a virtual character. Using a high-resolution three-dimensional virtual character, this study employed a 2×2 (stereoscopic mode vs. monoscopic mode × actual human size vs. small size display) factorial design in an experiment with 144 participants randomly assigned to each condition. Findings showed that stereoscopic mode had a significant effect on both users' sense of being together and psychological immersion. However, display size affected only the sense of being together. Furthermore, display size was not found to moderate the effect of stereoscopic mode. PMID:24606057

  18. A method for generating an illusion of backwards time travel using immersive virtual reality—an exploratory study

    PubMed Central

    Friedman, Doron; Pizarro, Rodrigo; Or-Berkers, Keren; Neyret, Solène; Pan, Xueni; Slater, Mel

    2014-01-01

    We introduce a new method, based on immersive virtual reality (IVR), to give people the illusion of having traveled backwards through time to relive a sequence of events in which they can intervene and change history. The participant had played an important part in events with a tragic outcome—deaths of strangers—by having to choose between saving 5 people or 1. We consider whether the ability to go back through time, and intervene, to possibly avoid all deaths, has an impact on how the participant views such moral dilemmas, and also whether this experience leads to a re-evaluation of past unfortunate events in their own lives. We carried out an exploratory study where in the “Time Travel” condition 16 participants relived these events three times, seeing incarnations of their past selves carrying out the actions that they had previously carried out. In a “Repetition” condition another 16 participants replayed the same situation three times, without any notion of time travel. Our results suggest that those in the Time Travel condition did achieve an illusion of “time travel” provided that they also experienced an illusion of presence in the virtual environment, body ownership, and agency over the virtual body that substituted their own. Time travel produced an increase in guilt feelings about the events that had occurred, and an increase in support of utilitarian behavior as the solution to the moral dilemma. Time travel also produced an increase in implicit morality as judged by an implicit association test. The time travel illusion was associated with a reduction of regret associated with bad decisions in their own lives. The results show that when participants have a third action that they can take to solve the moral dilemma (that does not immediately involve choosing between the 1 and the 5) then they tend to take this option, even though it is useless in solving the dilemma, and actually results in the deaths of a greater number. PMID:25228889

  19. Immersive volume rendering of blood vessels

    NASA Astrophysics Data System (ADS)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to store the resampled data efficiently by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows a more intuitive understanding of the flow phenomena and can be a great help to medical experts for treatment planning.
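
    The sparse-storage idea in the abstract (discarding empty regions of the resampled vessel volume) can be illustrated with a simple brick octree; the emptiness threshold, leaf brick size, and power-of-two padding below are assumptions, not details from the paper.

```python
# Minimal brick-octree sketch of the sparse storage described above: recursively
# subdivide the resampled volume and keep voxel bricks only where the data exceed
# an (assumed) emptiness threshold. A real renderer would add LOD, GPU upload, etc.
import numpy as np

class Node:
    def __init__(self, origin, size):
        self.origin, self.size = origin, size    # voxel-space corner and edge length
        self.children, self.brick = [], None

def build_octree(volume, origin=(0, 0, 0), size=None, leaf=32, threshold=0.0):
    if size is None:                             # pad the virtual extent to a power of two
        size = 1 << (max(volume.shape) - 1).bit_length()
    x, y, z = origin
    block = volume[x:x + size, y:y + size, z:z + size]
    if block.size == 0 or block.max() <= threshold:
        return None                              # empty region: store nothing
    node = Node(origin, size)
    if size <= leaf:
        node.brick = block.copy()                # occupied leaf brick
        return node
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                child = build_octree(volume, (x + dx, y + dy, z + dz), half, leaf, threshold)
                if child is not None:
                    node.children.append(child)
    return node
```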

  20. Learning immersion without getting wet

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2012-03-01

    This paper describes the teaching of an immersive environments class on the Spring of 2011. The class had students from undergraduate as well as graduate art related majors. Their digital background and interests were also diverse. These variables were channeled as different approaches throughout the semester. Class components included fundamentals of stereoscopic computer graphics to explore spatial depth, 3D modeling and skeleton animation to in turn explore presence, exposure to formats like a stereo projection wall and dome environments to compare field of view across devices, and finally, interaction and tracking to explore issues of embodiment. All these components were supported by theoretical readings discussed in class. Guest artists presented their work in Virtual Reality, Dome Environments and other immersive formats. Museum professionals also introduced students to space science visualizations, which utilize immersive formats. Here I present the assignments and their outcome, together with insights as to how the creation of immersive environments can be learned through constraints that expose students to situations of embodied cognition.