Science.gov

Sample records for 3d immersive virtual

  1. Versatile, Immersive, Creative and Dynamic Virtual 3-D Healthcare Learning Environments: A Review of the Literature

    PubMed Central

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and “serious gaming” that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined alongside various Web 2.0/3.0 applications. More empirical research is needed to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers’ Diffusion of Innovations Theory and Siemens’ Connectivism Theory for today’s learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  2. Three‐dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues

    PubMed Central

    Baghabra, Jumana; Boges, Daniya J.; Holst, Glendon R.; Kreshuk, Anna; Hamprecht, Fred A.; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki

    2016-01-01

    ABSTRACT Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we can project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling package with NeuroMorph plug‐ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. J. Comp. Neurol. 524:23–38, 2016. © 2015 Wiley Periodicals, Inc. PMID:26179415
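
    A minimal sketch of the kind of quantitative clustering and proximity analysis described above, assuming glycogen granule and bouton centroids have already been exported from the segmented reconstruction as plain-text XYZ tables; the file names, DBSCAN parameters, and the choice of scikit-learn/SciPy are illustrative assumptions, not the authors' implementation:

        import numpy as np
        from scipy.spatial import cKDTree
        from sklearn.cluster import DBSCAN

        # Assumed inputs: N x 3 and M x 3 arrays of centroid coordinates (nm)
        # exported from the segmented EM model; the file names are hypothetical.
        granules = np.loadtxt("glycogen_centroids.csv", delimiter=",")
        boutons = np.loadtxt("bouton_centroids.csv", delimiter=",")

        # Density-based clustering: granules within `eps` nm of each other are grouped.
        labels = DBSCAN(eps=250.0, min_samples=5).fit_predict(granules)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        print(f"{n_clusters} clusters, {int((labels == -1).sum())} unclustered granules")

        # Proximity: distance from every granule to its nearest neighbouring feature.
        dist, _ = cKDTree(boutons).query(granules, k=1)
        print(f"median granule-to-bouton distance: {np.median(dist):.1f} nm")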

  3. L2 Immersion in 3D Virtual Worlds: The Next Thing to Being There?

    ERIC Educational Resources Information Center

    Paillat, Edith

    2014-01-01

    Second Life is one of the many three-dimensional virtual environments accessible through a computer and a fast broadband connection. Thousands of participants connect to this platform to interact virtually with the world, join international communities of practice and, for some, role-play groups. Unlike online role-play games, however, Second Life…

  4. Enhancing Time-Connectives with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David; Eden, Sigal

    2010-01-01

    This study sought to test the most efficient representation mode with which children with hearing impairment could express a story while producing connectives indicating relations of time and of cause and effect. Using Bruner's (1973, 1986, 1990) representation stages, we tested the comparative effectiveness of Virtual Reality (VR) as a mode of…

  5. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, we analyse it with respect to specific questions, we synthesise results, and we present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s, and so have the methods for using geovisualisation in teaching. Whereas some might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance such as the CAVE makes 3D visualisation, including 3D geovisualisation, an important tool not only for businesses (Abulrub et al. 2012) and the public (Wissen et al. 2008), but also for educational purposes, for which it has hardly been used so far. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesised three fundamental technology-based visual aspects: First, the conception and comprehension of space and location do not need to be generated, but are instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  6. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.). The investment in infrastructure and its cost restricted it to large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile and lightweight planetariums or reproductions of poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess the contributions compared to other display modes.

  7. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  8. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360 degree spherical panoramas is reviewed. The 3D software package Blender coupled with Python and the Google Spatial Media module are used together to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can be applied to different kinds of astronomical data including 3D stellar and galaxy catalogs, images, and planetary maps.
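
    A hedged sketch of the Blender side of such a pipeline: a Python (bpy) script that switches the scene camera to an equirectangular panoramic projection and renders a 2:1 still. Property paths follow the Cycles API of recent Blender releases and may differ in other versions; the resolution and output path are arbitrary, and the script assumes the scene already contains a camera and 3D content.

        import bpy  # only available inside Blender's bundled Python

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'

        cam = scene.camera.data
        cam.type = 'PANO'
        cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # 360 x 180 degree projection

        # 2:1 aspect ratio expected by spherical-panorama viewers.
        scene.render.resolution_x = 4096
        scene.render.resolution_y = 2048
        scene.render.filepath = '//panorama_equirectangular.png'

        bpy.ops.render.render(write_still=True)

    The rendered still (or an equirectangular video) would then be tagged with 360-degree metadata, for example with Google's open-source Spatial Media tools mentioned above, before being loaded into a panorama-capable viewer.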

  9. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  10. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. In this way, however, valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  11. "Immersed in Learning": Supporting Creative Practice in Virtual Worlds

    ERIC Educational Resources Information Center

    Doyle, Denise

    2010-01-01

    The "Immersed in Learning" project began in 2007 to evaluate the use of 3D virtual worlds as a teaching and learning tool in undergraduate programmes in digital media at the University of Wolverhampton, UK. A question that the research set out to explore was what were the benefits of integrating 3D immersive learning with face-to-face…

  12. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning that simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  13. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the full immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using the 3ds max[TM], we can visualize…

  14. 3DIVS: 3-Dimensional Immersive Virtual Sculpting

    SciTech Connect

    Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E

    2001-10-03

    Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models such as clay models or mockups in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.

  15. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are transferred into 3D versions with respect to the specific content to be displayed. Virtual worlds (VW) have become a promising area of interest because of the possibility to dynamically modify content and to support multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within the specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with respect to the specific type of visualization and different levels of immersion.

  16. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  17. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like these to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building not only to meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and thereby leverage the experience gained for future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, the technologies used such as XNA Game Studio, .NET framework, and Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the results of our evaluation and the lessons learned from our effort.

  18. Immersive virtual reality simulations in nursing education.

    PubMed

    Kilmon, Carol A; Brown, Leonard; Ghosh, Sumit; Mikitiuk, Artur

    2010-01-01

    This article explores immersive virtual reality as a potential educational strategy for nursing education and describes an immersive learning experience now being developed for nurses. This pioneering project is a virtual reality application targeting speed and accuracy of nurse response in emergency situations requiring cardiopulmonary resuscitation. Other potential uses and implications for the development of virtual reality learning programs are discussed.

  19. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro-mirrors offering 720p resolution displays in a small form factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. We describe a design in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.
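
    As a side note, the collimation mentioned above ("imaged to infinity") follows from idealized thin-lens optics: if the DMD image sits in the front focal plane of the viewing optics, the image distance diverges and the relaxed eye focuses the display onto the retina. This is a generic textbook sketch, not the authors' actual optical prescription:

        \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}, \qquad s_o = f \;\Rightarrow\; \frac{1}{s_i} = 0 \;\Rightarrow\; s_i \to \infty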

  20. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  1. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to move freely about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations.

  2. Social Interaction Development through Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2014-01-01

    The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…

  3. VILLAGE--Virtual Immersive Language Learning and Gaming Environment: Immersion and Presence

    ERIC Educational Resources Information Center

    Wang, Yi Fei; Petrina, Stephen; Feng, Francis

    2017-01-01

    3D virtual worlds are promising for immersive learning in English as a Foreign Language (EFL). Unlike English as a Second Language (ESL), EFL typically takes place in the learners' home countries, and the potential of the language is limited by geography. Although learning contexts where English is spoken are important, in most EFL courses at the…

  4. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  5. Faculty Perceptions of Instruction in Collaborative Virtual Immersive Learning Environments in Higher Education

    ERIC Educational Resources Information Center

    Janson, Barbara

    2013-01-01

    The use of 3D (three-dimensional) avatars in synchronous virtual worlds for educational purposes has been adopted for only about a decade. Universities are offering synchronous, avatar-based virtual courses for credit within 3D worlds (Luo & Kemp, 2008). Faculty and students immerse themselves, via avatars, in virtual worlds and communicate…

  6. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material into geoscience courses in order to support and improve the teaching-learning process, especially in topics known to be difficult for students. As part of the project, professors and students are trained in visualization techniques; then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions through videoconferences with other universities and researchers.

  7. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, sizes, and XYZ positions, encoding various dimensions of the parameter space, which can be associated interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this
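
    A minimal illustration of the encoding idea described above, using synthetic data and matplotlib rather than the authors' Unity/OpenSimulator environments: three catalog columns drive the XYZ positions of a pseudo-3D scatter plot and two further columns drive color and marker size, giving roughly five visual dimensions per point.

        import numpy as np
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

        rng = np.random.default_rng(0)
        catalog = rng.random((100_000, 5))  # synthetic stand-in for a survey catalog

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        sc = ax.scatter(catalog[:, 0], catalog[:, 1], catalog[:, 2],
                        c=catalog[:, 3],           # 4th dimension -> color
                        s=1 + 20 * catalog[:, 4],  # 5th dimension -> marker size
                        alpha=0.3, linewidths=0)
        fig.colorbar(sc, label="dimension 4")
        ax.set_xlabel("dim 1"); ax.set_ylabel("dim 2"); ax.set_zlabel("dim 3")
        plt.show()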

  8. Learning in 3-D Virtual Worlds: Rethinking Media Literacy

    ERIC Educational Resources Information Center

    Qian, Yufeng

    2008-01-01

    3-D virtual worlds, as a new form of learning environments in the 21st century, hold great potential in education. Learning in such environments, however, demands a broader spectrum of literacy skills. This article identifies a new set of media literacy skills required in 3-D virtual learning environments by reviewing exemplary 3-D virtual…

  9. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  10. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optical sensing requires a restricted indoor environment to track body movements. Here we present a garment laden with 52 sensors interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution of the interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  11. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  12. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  13. Contextual EFL Learning in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Lan, Yu-Ju

    2015-01-01

    The purposes of the current study are to develop virtually immersive EFL learning contexts for EFL learners in Taiwan to preview and review English materials beyond the regular English class schedule. A two-iteration action research study lasting one semester was conducted to evaluate the effects of virtual contexts on learners' EFL learning. 132…

  14. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software package, which was developed based on object-oriented programming methodology and dedicated to medical physics research. This software was named Magical Phantom (MPhantom), and is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom, and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration, and has passed application testing with real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research.

  15. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  16. Using Immersive Virtual Environments for Certification

    NASA Technical Reports Server (NTRS)

    Lutz, R.; Cruz-Neira, C.

    1998-01-01

    Immersive virtual environments (VEs) technology has matured to the point where it can be utilized as a scientific and engineering problem solving tool. In particular, VEs are starting to be used to design and evaluate safety-critical systems that involve human operators, such as flight and driving simulators, complex machinery training, and emergency rescue strategies.

  17. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  18. ESL Teacher Training in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Kozlova, Iryna; Priven, Dmitri

    2015-01-01

    Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…

  19. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  20. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  1. Foreign language learning in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Sheldon, Lee; Si, Mei; Hand, Anton

    2012-03-01

    Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages. Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point. We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to truly effective language learning in virtual environments. In this paper, we describe the development of a novel application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon becomes complicated with intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic. Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language, cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with characters in the game.

  2. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  3. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  4. Learning Relative Motion Concepts in Immersive and Non-Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-01-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop…

  5. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high-quality homographies between pairs of imagers to develop a globally optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
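
    A simplified sketch of the pairwise geometry underlying such a calibration, using OpenCV on synthetic correspondences (the camera intrinsics, baseline, and point cloud are made up, and the paper's global, all-pairs optimization is not reproduced): it recovers a pairwise homography and a fundamental matrix, whose right null vector gives the epipole in the first camera.

        import numpy as np
        import cv2

        rng = np.random.default_rng(1)

        # Synthetic two-camera rig: random 3D points seen by two slightly offset cameras.
        pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))
        K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
        P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_b = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.05]])])

        def project(P, X):
            x = (P @ np.hstack([X, np.ones((len(X), 1))]).T).T
            return (x[:, :2] / x[:, 2:]).astype(np.float32)

        pts_a, pts_b = project(P_a, pts3d), project(P_b, pts3d)

        # Pairwise homography (exact only for planar scenes; RANSAC keeps a consensus set).
        H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)

        # Fundamental matrix; the epipole e_a in camera A satisfies F @ e_a = 0.
        F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.99)
        e_a = np.linalg.svd(F)[2][-1]
        print("epipole in camera A (homogeneous):", e_a / e_a[2])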

  6. Creating an Immersive Mars Experience Using Unity3D

    NASA Technical Reports Server (NTRS)

    Miles, Sarah

    2011-01-01

    Between the two Mars Exploration Rovers, Spirit and Opportunity, NASA has collected over 280,000 images while studying the Martian surface. This number will continue to grow, with Opportunity continuing to send images and with another rover, Curiosity, launching soon. Using data collected by and for these Mars rovers, I am contributing to the creation of virtual experiences that will expose the general public to Mars. These experiences not only work to increase public knowledge, but they attempt to do so in an engaging manner more conducive to knowledge retention by letting others view Mars through the rovers' eyes. My contributions include supporting image viewing (for example, allowing users to click on panoramic images of the Martian surface to access closer range photos) as well as enabling tagging of points of interest. By creating a more interactive way of viewing the information we have about Mars, we are not just educating the public about a neighboring planet. We are showing the importance of doing such research.

  7. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
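
    A rough sketch of the first scheme described above (color reduction followed by zlib over color plus depth), with assumed frame shapes and palette depth; this is not the TEEVE implementation itself.

        import zlib
        import numpy as np

        def compress_frame(color, depth, bits=4):
            """color: HxWx3 uint8, depth: HxW uint16 -> one zlib-compressed blob."""
            reduced = (color >> (8 - bits)).astype(np.uint8)   # color reduction
            return zlib.compress(reduced.tobytes() + depth.tobytes(), level=6)

        def decompress_frame(blob, shape, bits=4):
            h, w = shape
            raw = zlib.decompress(blob)
            color = np.frombuffer(raw[:h * w * 3], dtype=np.uint8).reshape(h, w, 3)
            depth = np.frombuffer(raw[h * w * 3:], dtype=np.uint16).reshape(h, w)
            return color << (8 - bits), depth                  # re-expand quantized color

        # Synthetic 640x480 frame; a real capture would compress far better than noise.
        c = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
        d = np.random.randint(0, 4096, (480, 640), dtype=np.uint16)
        blob = compress_frame(c, d)
        print(f"compression ratio: {(c.nbytes + d.nbytes) / len(blob):.2f}")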

  8. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications in which several users, represented by avatars, simultaneously access a 3D space via the Internet. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well-documented online action games industry, now often played…

  9. Measuring Knowledge Acquisition in 3D Virtual Learning Environments.

    PubMed

    Nunes, Eunice P dos Santos; Roque, Licínio G; Nunes, Fatima de Lourdes dos Santos

    2016-01-01

    Virtual environments can contribute to the effective learning of various subjects for people of all ages. Consequently, they assist in reducing the cost of maintaining physical structures of teaching, such as laboratories and classrooms. However, the measurement of how learners acquire knowledge in such environments is still incipient in the literature. This article presents a method to evaluate the knowledge acquisition in 3D virtual learning environments (3D VLEs) by using the learner's interactions in the VLE. Three experiments were conducted that demonstrate the viability of using this method and its computational implementation. The results suggest that it is possible to automatically assess learning in predetermined contexts and that some types of user interactions in 3D VLEs are correlated with the user's learning differential.
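
    As a hypothetical illustration of the correlation reported above (all numbers invented, not the study's data): a logged interaction count per learner is compared against the pre/post-test learning differential with a simple Pearson correlation.

        import numpy as np
        from scipy import stats

        interactions = np.array([12, 30, 7, 45, 22, 18, 40, 9, 27, 33])  # logged per learner
        pre_test = np.array([40, 35, 50, 30, 45, 42, 38, 55, 44, 36])
        post_test = np.array([55, 70, 58, 78, 66, 60, 75, 60, 68, 71])
        learning_diff = post_test - pre_test

        r, p = stats.pearsonr(interactions, learning_diff)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")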

  10. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping.…

  11. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  12. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  13. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  14. Declarative Knowledge Acquisition in Immersive Virtual Learning Environments

    ERIC Educational Resources Information Center

    Webster, Rustin

    2016-01-01

    The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…

  15. Virtual performer: single camera 3D measuring system for interaction in virtual space

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-10-01

    The authors developed interaction media systems in 3D virtual space. In these systems, the musician virtually plays an instrument like the theremin in the virtual space, or the performer plays a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image using a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method for measuring the positions of the performer, his/her head and both eyes using a single camera.

  16. Heard on The Street: GIS-Guided Immersive 3D Models as an Augmented Reality for Team Collaboration

    NASA Astrophysics Data System (ADS)

    Quinn, B. B.

    2007-12-01

    Grid computing can be configured to run physics simulations for spatially contiguous virtual 3D model spaces. Each cell is run by a single processor core simulating 1/16 square kilometer of surface and can contain up to 15,000 objects. In this work, a model of one urban block was constructed in the commercial 3D online digital world Second Life http://secondlife.com to prove the concept that GIS data can guide the construction of an accurate in-world model. Second Life simulators support terrain modeling at two-meter grid intervals. Access to the Second Life grid is worldwide if connections to the US-based servers are possible. This immersive 3D model allows visitors to explore the space at will, with physics simulated for object collisions, gravity, and wind forces about 40 times per second. Visitors view this world as renderings by their 3-D display card of graphic objects and raster textures that are streamed from the simulator grid to the Second Life client, based on that client's instantaneous field of view. Visitors to immersive 3D models experience a virtual world that engages their innate abilities to relate to the real immersive 3D world in which humans have evolved. These abilities enable far more complex and dynamic 3D environments to be quickly and accurately comprehended by more visitors than most non-immersive 3D environments. Objects of interest at ground surface and below can be walked around, possibly entered, viewed at arm's length, or flown over at 500 meters above. Videos of renderings have been recorded (as machinima) to share a visit as part of public presentations. Key to this experience is that dozens of simultaneous visitors can experience the model at the same time, each exploring it at will and seeing (if not colliding with) one another, like twenty geology students on a virtual outcrop, where each student might fly if they chose to. This work modeled the downtown Berkeley, CA, transit station in the Second Life region "Gualala" near [170, 35, 35

  17. Virtually ostracized: studying ostracism in immersive virtual environments.

    PubMed

    Kassner, Matthew P; Wesselmann, Eric D; Law, Alvin Ty; Williams, Kipling D

    2012-08-01

    Electronic-based communication (such as Immersive Virtual Environments; IVEs) may offer new ways of satisfying the need for social connection, but they also provide ways this need can be thwarted. Ostracism, being ignored and excluded, is a common social experience that threatens fundamental human needs (i.e., belonging, control, self-esteem, and meaningful existence). Previous ostracism research has made use of a variety of paradigms, including minimal electronic-based interactions (e.g., Cyberball) and communication (e.g., chatrooms and Short Message Services). These paradigms, however, lack the mundane realism that many IVEs now offer. Further, IVE paradigms designed to measure ostracism may allow researchers to test more nuanced hypotheses about the effects of ostracism. We created an IVE in which ostracism could be manipulated experimentally, emulating a previously validated minimal ostracism paradigm. We found that participants who were ostracized in this IVE experienced the same negative effects demonstrated in other ostracism paradigms, providing, to our knowledge, the first evidence of the negative effects of ostracism in virtual environments. Though further research directly exploring these effects in online virtual environments is needed, this research suggests that individuals encountering ostracism in other virtual environments (such as massively multiplayer online role playing games; MMORPGs) may experience negative effects similar to those of being ostracized in real life. This possibility may have serious implications for individuals who are marginalized in their real life and turn to IVEs to satisfy their need for social connection.

  18. Comparing 3D virtual methods for hemimandibular body reconstruction.

    PubMed

    Benazzi, Stefano; Fiorenza, Luca; Kozakowski, Stephanie; Kullmer, Ottmar

    2011-07-01

    Reconstruction of fractured, distorted, or missing parts of the human skeleton presents a common challenge in the fields of paleoanthropology, bioarcheology, forensics, and medicine. It is particularly important in disciplines such as orthodontics and surgery when dealing with mandibular defects due to tumors, developmental abnormalities, or trauma. In such cases, proper restoration of both form (for esthetic purposes) and function (restoration of articulation, occlusion, and mastication) is required. Several digital approaches based on three-dimensional (3D) digital modeling, computer-aided design (CAD)/computer-aided manufacturing techniques, and more recently geometric morphometric methods have been used to solve this problem. Nevertheless, comparisons among their outcomes are rarely provided. In this contribution, three methods for hemimandibular body reconstruction were tested. Two bone defects were virtually simulated in a 3D digital model of a human hemimandible. Accordingly, 3D digital scaffolds were obtained using the mirror copy of the unaffected hemimandible (Method 1), thin plate spline (TPS) interpolation (Method 2), and a combination of TPS and CAD techniques (Method 3). The mirror copy of the unaffected hemimandible does not provide a suitable solution for bone restoration. The combination of TPS interpolation and CAD techniques (Method 3) produces an almost perfect-fitting 3D digital model that can be used for biocompatible custom-made scaffolds generated by rapid prototyping technologies.
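
    Methods 2 and 3 above rely on thin plate spline (TPS) interpolation between corresponding landmarks. The sketch below is a generic illustration of TPS warping using SciPy's RBFInterpolator with a thin-plate-spline kernel; the landmark coordinates are invented, and this is not the authors' reconstruction pipeline.

        # Generic illustration of thin plate spline (TPS) interpolation between
        # corresponding landmarks; not the authors' reconstruction pipeline.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Invented 3D landmarks on the reference (unaffected) hemimandible ...
        reference = np.array([[0.0, 0.0, 0.0],
                              [10.0, 0.0, 2.0],
                              [20.0, 5.0, 4.0],
                              [30.0, 12.0, 5.0],
                              [40.0, 20.0, 6.0]])
        # ... and their counterparts around the defect (here a simple offset).
        target = reference + np.array([0.5, -0.3, 0.2])

        # TPS mapping from reference space to target space.
        tps = RBFInterpolator(reference, target, kernel="thin_plate_spline")

        # Warp arbitrary surface vertices of the reference model toward the defect.
        vertices = np.array([[5.0, 1.0, 1.0], [25.0, 8.0, 4.5]])
        print(tps(vertices))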

  19. Enhanced LOD Concepts for Virtual 3d City Models

    NASA Astrophysics Data System (ADS)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects such as buildings, streets, or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: it must effectively support the partitioning of a complete model into alternative models of different complexity and provide metadata addressing the informational content, complexity, and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD) and, second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
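
    A minimal sketch of how the proposed separation between geometric and semantic detail, and between interior and exterior, might be recorded as per-representation metadata is given below. The enum values and field names are hypothetical illustrations, not the CityGML schema or the paper's UML model.

        # Hypothetical metadata record separating geometric and semantic detail;
        # names are illustrative, not the CityGML schema or the paper's UML model.
        from dataclasses import dataclass
        from enum import IntEnum
        from typing import Optional

        class GLoD(IntEnum):          # geometric refinement, coarse to fine
            BLOCK = 0
            ROOF_SHAPE = 1
            DETAILED_EXTERIOR = 2

        class SLoD(IntEnum):          # semantic richness, independent of geometry
            UNCLASSIFIED = 0
            THEMATIC_SURFACES = 1
            BUILDING_INSTALLATIONS = 2

        @dataclass
        class BuildingRepresentation:
            building_id: str
            exterior_glod: GLoD
            exterior_slod: SLoD
            interior_glod: Optional[GLoD] = None   # interior may be modeled separately
            interior_slod: Optional[SLoD] = None

        rep = BuildingRepresentation("b42", GLoD.ROOF_SHAPE, SLoD.THEMATIC_SURFACES)
        print(rep)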

  20. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... New Ways to Detect Colon Cancer 3-D virtual screening now being used Past Issues / Spring 2009 ... showcases a 3-D image generated by the virtual colonoscopy software he invented with a team of ...

  1. The virtual reality 3D city of Ningbo

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Wu, Dun

    2010-11-01

    In 2005, the Ningbo Design Research Institute of Mapping & Surveying started developing the concepts and an implementation of the Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital-city technological framework and is well supported by computing advances, space technologies, and commercial innovations. It has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is not only a 3D-GIS launch project but also a technology innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo. Geo-information systems are developing towards more realistic, three-dimensional, Service-Oriented-Architecture-based systems. VRNS uses technologies such as 3D modeling, user interface design, view scene modeling, real-time rendering, and interactive roaming within a virtual environment. Two applications of VRNS already in use are city planning and security management of high-rise buildings. The final purpose is to develop VRNS into a powerful public information platform and to have heterogeneous city information resources share this single platform.

  2. Extended pie menus for immersive virtual environments.

    PubMed

    Gebhardt, Sascha; Pick, Sebastian; Leithold, Franziska; Hentschel, Bernd; Kuhlen, Torsten

    2013-04-01

    Pie menus are a well-known technique for interacting with 2D environments, and a large body of research documents their usage and optimizations. Yet comparatively little research has been done on the usability of pie menus in immersive virtual environments (IVEs). In this paper we reduce this gap by presenting an implementation and evaluation of an extended hierarchical pie menu system for IVEs that can be operated with a six-degrees-of-freedom input device. Following an iterative development process, we first developed and evaluated a basic hierarchical pie menu system. To better understand how pie menus should be operated in IVEs, we tested this system in a pilot user study with 24 participants, focusing on item selection. Based on the results of the study, the system was refined, and elements such as check boxes, sliders, and color map editors were added to provide extended functionality. An expert review with five experts was performed with the extended pie menus integrated into an existing VR application to identify potential design issues. Overall results indicated high performance and efficient design.

  3. The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Lawless-Reljic, Sabine Karine

    2010-01-01

    Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text-…

  4. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories of fully integrating 3D input devices into immersive virtual environments. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, several factors prevent us from choosing them for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though a spherical coordinate grid seems ideal for interaction with a 3D dome display, other non-spherical grids can be used as well.

  5. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  6. Second Life, a 3-D Animated Virtual World: An Alternative Platform for (Art) Education

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2011-01-01

    3-D animated virtual worlds are no longer only for gaming. With the advance of technology, animated virtual worlds not only are found on every computer, but also connect users with the internet. Today, virtual worlds are created not only by companies, but also through the collaboration of users. Online 3-D animated virtual worlds provide a new…

  7. Load Assembly of the Ignitor Machine with 3D Interactive Virtual Reality

    NASA Astrophysics Data System (ADS)

    Migliori, S.; Pierattini, S.

    2003-10-01

    The main purpose of this work is to assist the Ignitor team in every phase of the project using the new Virtual Reality (VR) technology. Through VR it is possible to see, plan, and test the machine assembly sequence and the total layout. We are also planning to simulate the remote handling systems in VR. The complexity of the system requires a large and powerful graphical device. ENEA's "Advanced Visualization Technology" team has implemented a repository file data structure integrated with the CATIA drawings from the designers of Ignitor. The 3D virtual mockup software is used to view and analyze all objects that compose the mockup and also to analyze the correct assembly sequences. ENEA's 3D immersive system and software are fully integrated in ENEA's supercomputing GRID infrastructure. At any time, all members of the Ignitor Project can view the status of the mockup in 3D (draft and/or final objects) over the net. During the conference, examples of the assembly sequence and the load assembly structure will be presented.

  8. Virtual Reality--Learning by Immersion.

    ERIC Educational Resources Information Center

    Dunning, Jeremy

    1998-01-01

    Discusses the use of virtual reality in educational software. Topics include CAVE (Computer-Assisted Virtual Environments); cost-effective virtual environment tools including QTVR (Quick Time Virtual Reality); interactive exercises; educational criteria for technology-based educational tools; and examples of screen displays. (LRW)

  9. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  10. Student Responses to Their Immersion in a Virtual Environment.

    ERIC Educational Resources Information Center

    Taylor, Wayne

    Undertaken in conjunction with a larger study that investigated the educational efficacy of students building their own virtual worlds, this study measures the reactions of students in grades 4-12 to the experience of being immersed in virtual reality (VR). The study investigated the sense of "presence" experienced by the students, the…

  11. The Components of Effective Teacher Training in the Use of Three-Dimensional Immersive Virtual Worlds for Learning and Instruction Purposes: A Literature Review

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin

    2014-01-01

    The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…

  12. Participatory Gis: Experimentations for a 3d Social Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2013-08-01

    The dawn of GeoWeb 2.0, the geographic extension of Web 2.0, has opened new possibilities in terms of online dissemination and sharing of geospatial content, thus laying the foundations for a fruitful development of Participatory GIS (PGIS). The purpose of the study is to investigate the extension of PGIS applications, which are quite mature in the traditional two-dimensional framework, into the third dimension. In more detail, the system should couple powerful 3D visualization with increased public participation by means of a tool allowing data collection from mobile devices (e.g. smartphones and tablets). The PGIS application, built using the open source NASA World Wind virtual globe, is focussed on the cultural and tourism heritage of the city of Como, located in Northern Italy. An authentication mechanism was implemented, which allows users to create and manage customized projects through cartographic mash-ups of Web Map Service (WMS) layers. Saved projects populate a catalogue which is available to the entire community. Together with historical maps and the current cartography of the city, the system is also able to manage geo-tagged multimedia data, which come from user field surveys performed through mobile devices and report POIs (Points Of Interest). Each logged-in user can then contribute to POI characterization by adding textual and multimedia information (e.g. images, audio, and video) directly on the globe. All in all, the resulting application allows users to create and share contributions as usually happens on social platforms, while additionally providing a realistic 3D representation that enhances the expressive power of the data.
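
    A minimal sketch of the kind of content the application manages (WMS layers mashed up into a user project, plus geo-tagged POIs with user-contributed media) is given below. The class and field names are hypothetical and do not reflect the actual NASA World Wind-based implementation.

        # Hypothetical record types for the content described: WMS layers mashed
        # up into a project, plus user-contributed geo-tagged POIs with media.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class WMSLayer:
            endpoint: str            # base URL of a WMS service
            layer_name: str

        @dataclass
        class PointOfInterest:
            lat: float
            lon: float
            title: str
            media: List[str] = field(default_factory=list)     # image/audio/video URIs
            comments: List[str] = field(default_factory=list)

        @dataclass
        class Project:
            owner: str
            layers: List[WMSLayer] = field(default_factory=list)
            pois: List[PointOfInterest] = field(default_factory=list)

        project = Project(
            owner="user01",
            layers=[WMSLayer("https://example.org/wms", "historical_map_1722")],
            pois=[PointOfInterest(45.81, 9.08, "Como cathedral",
                                  media=["img/duomo.jpg"])])
        print(len(project.layers), len(project.pois))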

  13. VSViewer3D: a tool for interactive data mining of three-dimensional virtual screening data.

    PubMed

    Diller, Kyle I; Diller, David J

    2014-12-22

    The VSviewer3D is a simple Java tool for visual exploration of three-dimensional (3D) virtual screening data. The VSviewer3D brings together the ability to explore numerical data, such as calculated properties and virtual screening scores, structure depiction, interactive topological and 3D similarity searching, and 3D visualization. By doing so the user is better able to quickly identify outliers, assess tractability of large numbers of compounds, visualize hits of interest, annotate hits, and mix and match interesting scaffolds. We demonstrate the utility of the VSviewer3D by describing a use case in a docking based virtual screen.

  14. Visualization of reservoir simulation data with an immersive virtual reality system

    SciTech Connect

    Williams, B.K.

    1996-10-01

    This paper discusses an investigation into the use of an immersive virtual reality (VR) system to visualize reservoir simulation output data. The hardware and software configurations of the test-immersive VR system are described and compared to a nonimmersive VR system and to an existing workstation screen-based visualization system. The structure of 3D reservoir simulation data and the actions to be performed on the data within the VR system are discussed. The subjective results of the investigation are then presented, followed by a discussion of possible future work.

  15. Going Virtual… or Not: Development and Testing of a 3D Virtual Astronomy Environment

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, L.; Speck, A.; Ding, N.; Baldridge, S.; Witzig, S.; Laffey, J.

    2013-04-01

    We present preliminary results of a pilot study of students' transfer of an astronomy concept into a new environment. We also share our discoveries about which aspects of a 3D environment students consider motivational or discouraging for their learning. This study was conducted among 64 non-science-major students enrolled in an astronomy laboratory course. During the course, students learned the concept and applications of Kepler's laws using a 2D interactive environment. Later in the semester, the students were placed in a 3D environment in which they were asked to conduct observations and to answer a set of questions pertaining to Kepler's laws of planetary motion. In this study, we were interested in observing, scrutinizing, and assessing students' behavior: from the choices they made while creating their avatars (virtual representations), to the tools they chose to use, to their navigational patterns, to their levels of discourse in the environment. These observations helped us identify which features of the 3D environment our participants found helpful and interesting and which tools created unnecessary clutter and distraction. The students' social behavior patterns in the virtual environment, together with their answers to the questions, helped us determine how well they understood Kepler's laws, how well they could transfer the concepts to a new situation, and at what point a motivational tool such as a 3D environment becomes a disruption to constructive learning. Our findings confirmed that students construct deeper knowledge of a concept when they are fully immersed in the environment.
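
    The underlying relationship the exercises ask students to apply is Kepler's third law, P^2 = a^3 when P is in years and a in astronomical units for orbits around the Sun. The short worked check below is illustrative and not part of the study's materials.

        # Worked check of Kepler's third law, P^2 = a^3 (P in years, a in AU),
        # the relationship the 2D and 3D exercises ask students to apply.
        planets = {"Mercury": 0.387, "Earth": 1.000, "Mars": 1.524, "Jupiter": 5.203}

        for name, a_au in planets.items():
            period_yr = a_au ** 1.5      # from P^2 = a^3 for orbits around the Sun
            print(f"{name:8s} a = {a_au:5.3f} AU  ->  P ~ {period_yr:6.2f} yr")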

  16. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
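
    Performance above is summarized by a normalized RMS tracking error. One common normalization divides the RMS error by the RMS excursion of the target signal; the sketch below uses that definition, which may differ from the authors' exact metric.

        # One common definition: RMS of the tracking error divided by RMS of the
        # target excursion. The paper's exact normalization may differ.
        import numpy as np

        def normalized_rms_error(target, response):
            err = response - target
            return np.sqrt(np.mean(err ** 2)) / np.sqrt(np.mean(target ** 2))

        t = np.linspace(0.0, 10.0, 1000)
        target = np.sin(t)                    # large-amplitude tracking reference
        response = 0.95 * np.sin(t - 0.15)    # lagging, attenuated response
        print(f"normalized RMS error = {normalized_rms_error(target, response):.3f}")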

  17. Performance of dental students versus prosthodontics residents on a 3D immersive haptic simulator.

    PubMed

    Eve, Elizabeth J; Koo, Samuel; Alshihri, Abdulmonem A; Cormier, Jeremy; Kozhenikov, Maria; Donoff, R Bruce; Karimbux, Nadeem Y

    2014-04-01

    This study evaluated the performance of dental students versus prosthodontics residents on a simulated caries removal exercise using a newly designed, 3D immersive haptic simulator. The intent of this study was to provide an initial assessment of the simulator's construct validity, which in the context of this experiment was defined as its ability to detect a statistically significant performance difference between novice dental students (n=12) and experienced prosthodontics residents (n=14). Both groups received equivalent calibration training on the simulator and repeated the same caries removal exercise three times. Novice and experienced subjects' average performance differed significantly on the caries removal exercise with respect to the percentage of carious lesion removed and volume of surrounding sound tooth structure removed (p<0.05). Experienced subjects removed a greater portion of the carious lesion, but also a greater volume of the surrounding tooth structure. Efficiency, defined as percentage of carious lesion removed over drilling time, improved significantly over the course of the experiment for both novice and experienced subjects (p<0.001). Within the limitations of this study, experienced subjects removed a greater portion of carious lesion on a 3D immersive haptic simulator. These results are a first step in establishing the validity of this device.
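
    The outcome measures reported above (percentage of the carious lesion removed, volume of surrounding sound structure removed, and efficiency defined as percent removed over drilling time) can be computed as in the sketch below. The voxel counts and times are invented for illustration and are not data from the study.

        # Outcome measures named in the abstract, computed from hypothetical
        # voxel counts; all numbers are invented for illustration.
        lesion_voxels_total = 5200
        lesion_voxels_removed = 4680
        sound_voxels_removed = 310
        voxel_volume_mm3 = 0.001
        drilling_time_s = 95.0

        percent_lesion_removed = 100.0 * lesion_voxels_removed / lesion_voxels_total
        sound_volume_removed_mm3 = sound_voxels_removed * voxel_volume_mm3
        efficiency = percent_lesion_removed / drilling_time_s   # percent per second

        print(f"lesion removed: {percent_lesion_removed:.1f}%")
        print(f"sound structure removed: {sound_volume_removed_mm3:.2f} mm^3")
        print(f"efficiency: {efficiency:.2f} %/s")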

  18. Light-absorbent liquid immersion angled exposure for patterning 3D samples with vertical sidewalls

    NASA Astrophysics Data System (ADS)

    Kumagai, Shinya; Kubo, Hironori; Sasaki, Minoru

    2017-02-01

    To make photolithography patterns on 3D samples, the angled (inclined) exposure technique has been used so far. However, technological issues have emerged in making photolithography patterns on the surfaces of trench structures. The surface of a trench structure can be covered with a photoresist film by spray-coating, but the photoresist film deposited on the sidewalls and bottom of the trench is generally thin. The thin photoresist film deposited inside the trench is easily overdosed. Moreover, irregular patterns have frequently been formed by light reflected inside the trench. In this study, we have developed liquid immersion photolithography using a light-absorbent material. Light reflection inside the trench was suppressed. Various patterns were transferred into the photoresist film deposited on trench structures with an aspect ratio of 0.74. Compared to immersion photolithography using pure water under p-polarization light control, the light-absorbent liquid immersion photolithography developed here patterned the surfaces of the trench sidewalls and bottom well.

  19. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity

    PubMed Central

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-01-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the carbonisation of nanopatterned polymers, polymer flow and subsequent pattern changes may occur in order to relieve their high surface energies. Here, we demonstrate that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhances thermal resistance and maintains structural integrity during the heat treatment. We employed liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of the SU8 nanopatterns were characterised, as was the micro-crystallinity of the SU8-derived carbon nanopatterns. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provides a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, for block copolymer or surfactant self-assemblies. PMID:26677949

  20. Implementation of virtual models from sheet metal forming simulation into physical 3D colour models using 3D printing

    NASA Astrophysics Data System (ADS)

    Junk, S.

    2016-08-01

    Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results from simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could represent the virtual results are lacking. Physical 3D models created using 3D printing can serve as illustrations and provide a better understanding of the simulation results. In this way, the results from the simulation can be made more "comprehensible" within a development team. This paper presents the possibilities of 3D colour printing with particular consideration of the requirements regarding the implementation of sheet metal forming simulation. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is expounded upon on the basis of simulation results.

  1. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The parameters indispensable for achieving spiral CT images that are as realistic as possible are recalled, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices; these provide information similar to that obtained for the rare indications for thoracic MRI. Thick-slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views in which the contrast can be modified by selecting the more dense (MIP) or less dense (minIP) voxels. They find their application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained; they give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the appearance of the mucosa or provide biopsy specimens. It offers possible applications for preparing, guiding, and controlling interventional fibroscopy procedures.
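
    The thick-slab projections mentioned above reduce a stack of slices to a single view by keeping, along each ray, the densest voxel (MIP) or the least dense voxel (minIP, useful for air-filled airways). A minimal NumPy sketch on a synthetic volume:

        # MIP keeps the densest voxel along each ray; minIP keeps the least dense.
        import numpy as np

        volume = np.random.randint(-1000, 400, size=(40, 256, 256))  # synthetic CT slab (HU)

        mip = volume.max(axis=0)     # maximum intensity projection
        minip = volume.min(axis=0)   # minimum intensity projection
        print(mip.shape, minip.shape)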

  2. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D models, which permit students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features selected cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions, and student feedback is included.

  3. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express, and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX, and HiRISE instruments allowed the computation of Digital Elevation Models with resolutions from hundreds of meters down to 1 meter per pixel, and corresponding orthoimages with resolutions from a few hundred meters down to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  4. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...

  5. ETeach3D: Designing a 3D Virtual Environment for Evaluating the Digital Competence of Preservice Teachers

    ERIC Educational Resources Information Center

    Esteve-Mon, Francesc M.; Cela-Ranilla, Jose María; Gisbert-Cervera, Mercè

    2016-01-01

    The acquisition of teacher digital competence is a key aspect in the initial training of teachers. However, most existing evaluation instruments do not provide sufficient evidence of this teaching competence. In this study, we describe the design and development process of a three-dimensional (3D) virtual environment for evaluating the teacher…

  6. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  7. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    SciTech Connect

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine; Wilson, Kristina; Wang, Xiuling; Zhou, Chenn; Ice, Phil

    2013-01-01

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education; and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  8. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    DOE PAGES

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine; ...

    2013-01-01

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education; and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  9. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object. It involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents the issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the VR (Virtual Reality) class was applied. In the elaborated interactive application, procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion, and the drive system of the manipulator. Additionally, a procedure was created for turning the output crushing head, mounted on the last element of the manipulator, on and off. Procedures were also established in the interactive application for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of the work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In the developed application, procedures were created that are responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which is verified the
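
    The coupling pattern described above (a high-level-language application that polls the virtual controller and drives the actuators of the visualized machine) can be illustrated with a generic sketch. All names below are invented for illustration; the paper itself uses DDE and a Visual Basic application, which are not reproduced here.

        # Generic illustration of the coupling loop: read the virtual controller's
        # outputs, drive the machine model's actuators, return sensor states.
        # All names are invented; the paper uses DDE and Visual Basic instead.
        def control_step(controller_outputs, machine_state):
            if controller_outputs.get("advance_drive"):
                machine_state["position_mm"] += 1.0                 # translatory drive
            if controller_outputs.get("rotate_head"):
                machine_state["head_angle_deg"] = (machine_state["head_angle_deg"] + 5.0) % 360.0
            machine_state["crushing_head_on"] = bool(controller_outputs.get("head_enable"))
            # Sensor values handed back to the controller for its next scan cycle.
            return {"at_target": machine_state["position_mm"] >= 100.0}

        state = {"position_mm": 0.0, "head_angle_deg": 0.0, "crushing_head_on": False}
        for _ in range(3):
            feedback = control_step({"advance_drive": True, "head_enable": True}, state)
        print(state, feedback)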

  10. A Second Life for eHealth: Prospects for the Use of 3-D Virtual Worlds in Clinical Psychology

    PubMed Central

    Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-01-01

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed. PMID:18678557

  11. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies

    PubMed Central

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2010-01-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the
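
    The stabilization strategy named above combines strong-coupling iteration, under-relaxation, and Aitken's acceleration. The sketch below illustrates Aitken dynamic relaxation on a toy scalar fixed-point iteration; the operator G is a stand-in, not the CURVIB flow or structure solvers.

        # Aitken dynamic under-relaxation for a partitioned fixed-point iteration
        # d = G(d); G is a toy stand-in, not the CURVIB flow/structure solvers.
        def G(d):
            return 0.5 * d + 2.0           # toy interface operator; fixed point d* = 4

        d = 0.0
        omega = 0.5                        # initial relaxation factor
        r_prev = None
        for k in range(10):
            r = G(d) - d                   # interface residual
            if r_prev is not None and r != r_prev:
                omega = -omega * r_prev * (r - r_prev) / (r - r_prev) ** 2   # Aitken update
            d += omega * r
            r_prev = r
        print(f"converged interface value ~ {d:.6f}")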

  12. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol, we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  13. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

    Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard, and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have been able to design 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up and used direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  14. Situating Pedagogies, Positions and Practices in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi; Gourlay, Lesley; Tombs, Cathy; Steils, Nicole; Tombs, Gemma; Mawer, Matt

    2010-01-01

    Background: The literature on immersive virtual worlds and e-learning to date largely indicates that technology has led the pedagogy. Although rationales for implementing e-learning have included flexibility of provision and supporting diversity, none of these recommendations has helped to provide strong pedagogical location. Furthermore, there is…

  15. Immersive Virtual Worlds in University-Level Human Geography Courses

    ERIC Educational Resources Information Center

    Dittmer, Jason

    2010-01-01

    This paper addresses the potential for increased deployment of immersive virtual worlds in higher geographic education. An account of current practice regarding popular culture in the geography classroom is offered, focusing on the objectification of popular culture rather than its constitutive role vis-a-vis place. Current e-learning practice is…

  16. The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning

    ERIC Educational Resources Information Center

    Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos

    2004-01-01

    This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…

  17. 3D Inhabited Virtual Worlds: Interactivity and Interaction between Avatars, Autonomous Agents, and Users.

    ERIC Educational Resources Information Center

    Jensen, Jens F.

    This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…

  18. Intelligent Tutors in Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Yan, Peng; Slator, Brian M.; Vender, Bradley; Jin, Wei; Kariluoma, Matti; Borchert, Otto; Hokanson, Guy; Aggarwal, Vaibhav; Cosmano, Bob; Cox, Kathleen T.; Pilch, André; Marry, Andrew

    2013-01-01

    Research into virtual role-based learning has progressed over the past decade. Modern issues include gauging the difficulty of designing a goal system capable of meeting the requirements of students with different knowledge levels, and the reasonability and possibility of taking advantage of the well-designed formula and techniques served in other…

  19. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castral landscape. It is visible from the valley, was named "the Eye of the Witch", and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the vestiges. Among the numerous planned works, a key objective was to produce a 3D model of the site in its current state, in other words a virtual "as-surveyed" model, usable from a cultural and tourist point of view as well as by scientists in archaeological research. The team of the ICube/INSA lab was responsible for producing this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from a series of former excavations. The objectives of this project were the following:
    • Acquisition of 3D digital data of the site and 3D modelling
    • Digitization of the 2D archaeological data and integration in the 3D model
    • Implementation of a database connected to the 3D model
    • Virtual Visit of the site
    The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  20. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing, and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation, and wayfinding in 3D virtual worlds may impact student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection included semi-structured interviews with Second Life students, educators, and designers. The findings revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography, and psychology can influence the design of spaces in 3D multi-user virtual environments.

  1. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    NASA Astrophysics Data System (ADS)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application that has been developed by a collaboration between the Norwegian University of Science and Technology in Trondheim, Norway (the NTNU University Library) and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries, and ICT academy partners, aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor to present manuscript content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototype books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether immersion in such environments can create wider engagement and support learning. The combined metaphor of 3D books and game design allows the digital books to be handled through a tactile experience, substituting for physical browsing. In this paper we present some preliminary results concerning the enrichment of the user experience in such an environment.

  2. Effectiveness of Collaborative Learning with 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Cho, Young Hoan; Lim, Kenneth Y. T.

    2017-01-01

    Virtual worlds have affordances to enhance collaborative learning in authentic contexts. Despite the potential of collaborative learning with a virtual world, few studies investigated whether it is more effective in student achievements than teacher-directed instruction. This study investigated the effectiveness of collaborative problem solving…

  3. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to the study of point symmetry. The use of 3D printing to…

  4. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allow users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  5. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate on the development and use of interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and the lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data and the use of augmented reality for informal learning in museum settings.

  6. Evaluation of Home Delivery of Lectures Utilizing 3D Virtual Space Infrastructure

    ERIC Educational Resources Information Center

    Nishide, Ryo; Shima, Ryoichi; Araie, Hiromu; Ueshima, Shinichi

    2007-01-01

    Evaluation experiments have been essential in exploring home delivery of lectures for which users can experience campus lifestyle and distant learning through 3D virtual space. This paper discusses the necessity of virtual space for distant learners by examining the effects of virtual space. The authors have pursued the possibility of…

  7. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  8. Orchestrating learning during implementation of a 3D virtual world

    NASA Astrophysics Data System (ADS)

    Karakus, Turkan; Baydas, Ozlem; Gunay, Fatma; Coban, Murat; Goktas, Yuksel

    2016-10-01

    There are many issues to be considered when designing virtual worlds for educational purposes. In this study, the term orchestration is given a new definition: the moderation of problems encountered while turning a virtual world into an educational setting for winter sports. A development case showed that community plays a key role both in the emergence of challenges and in the determination of their solutions. The implications of this study showed that activity theory was a useful tool for understanding contextual issues; instructional designers therefore first developed relevant tools and community-based solutions. This study attempts to use activity theory in a prescriptive way, although it is known as a descriptive theory. Finally, since virtual world projects have many aspects, the variety of challenges and practical solutions presented in this study will provide practitioners with suggestions on how to overcome problems in the future.

  9. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
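    A rough sketch, not code from the VL system itself, of how an event-driven object model of this kind might be organized; the class names, the event name, and the wand example are hypothetical stand-ins for the primitive object types listed above.

```python
# Hypothetical sketch of an event-driven object model like the one described above.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class VLObject:
    """Base class: every object owns properties, methods, and event handlers."""
    name: str
    properties: Dict[str, float] = field(default_factory=dict)
    handlers: Dict[str, List[Callable]] = field(default_factory=dict)

    def on(self, event: str, handler: Callable) -> None:
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event: str, **payload) -> None:
        for handler in self.handlers.get(event, []):
            handler(self, **payload)


@dataclass
class InterfaceObject(VLObject):   # e.g. a wand or haptic-glove binding
    device: str = "wand"


@dataclass
class GeometricEntity(VLObject):   # e.g. a test-article surface mesh
    vertices: list = field(default_factory=list)


@dataclass
class Container(VLObject):
    """Groups several objects so they can be moved or hidden together."""
    children: List[VLObject] = field(default_factory=list)

    def add(self, obj: VLObject) -> None:
        self.children.append(obj)


# Usage: a button-press event on the wand could steer the simulation object.
wind_tunnel = Container(name="virtual_wind_tunnel")
wand = InterfaceObject(name="wand", device="wand")
model = GeometricEntity(name="wing_section")
wind_tunnel.add(wand)
wind_tunnel.add(model)
wand.on("button_pressed", lambda src, **kw: print(f"{src.name}: {kw}"))
wand.fire("button_pressed", button=1)
```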

  10. From Cognitive Capability to Social Reform? Shifting Perceptions of Learning in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi

    2008-01-01

    Learning in immersive virtual worlds (simulations and virtual worlds such as Second Life) could become a central learning approach in many curricula, but the socio-political impact of virtual world learning on higher education remains under-researched. Much of the recent research into learning in immersive virtual worlds centres around games and…

  11. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    Artificial Intelligence Laboratory and Center for Biological and Computational Learning, 545 Technology Square, Cambridge: A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992. The memo describes 3D object recognition research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory.

  12. Cognitive factors associated with immersion in virtual environments

    NASA Technical Reports Server (NTRS)

    Psotka, Joseph; Davison, Sharon

    1993-01-01

    Immersion into the dataspace provided by a computer, and the feeling of really being there or 'presence', are commonly acknowledged as the uniquely important features of virtual reality environments. How immersed one feels appears to be determined by a complex set of physical components and affordances of the environment, and by as yet poorly understood psychological processes. Pimentel and Teixeira say that the experience of being immersed in a computer-generated world involves the same mental shift of 'suspending your disbelief for a period of time' as 'when you get wrapped up in a good novel or become absorbed in playing a computer game'. That sounds as if it could be right, but it would be good to get some evidence for these important conclusions. It might be even better to try to connect these statements with theoretical positions that try to do justice to complex cognitive processes. The basic precondition for understanding Virtual Reality (VR) is understanding the spatial representation systems that localize our bodies or egocenters in space. The effort to understand these cognitive processes is being driven with new energy by the pragmatic demands of successful virtual reality environments, but the literature remains sparse and anecdotal.

  13. Unstructured Cartesian refinement with sharp interface immersed boundary method for 3D unsteady incompressible flows

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis

    2016-11-01

    A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined, fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence-free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimensional Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix-free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency, and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. Finally, a geometry-resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates
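    For orientation, the fractional step (projection) scheme referred to above can be written generically as follows; this is the textbook form, and the paper's actual discretization of the convective and viscous terms on the hybrid staggered/non-staggered layout differs in detail.

```latex
% 1. Predictor: advance momentum to an intermediate velocity u* that is not yet divergence-free.
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
  = -\left(\mathbf{u}\cdot\nabla\mathbf{u}\right)^{n+1/2} + \nu\,\nabla^{2}\mathbf{u}^{*}

% 2. Pressure Poisson equation: enforce continuity.
\nabla^{2}\phi^{n+1} = \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t}

% 3. Projection: correct the velocity so that \nabla\cdot\mathbf{u}^{n+1} = 0 to machine precision.
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla\phi^{n+1}
```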

  14. 3D Virtual Images and Forensic Identification Training

    DTIC Science & Technology

    2010-08-04

    ...properly trained for these duties, as a minimum all Air Force Dental Residency programs (13 sites) require a course in forensic dentistry including... Number: FKE20080002E. Title: "3D Virtual Images and Forensic Identification Training". Principal Investigator (PI): Stephanie A. Stouder, Lt Col... 47XX identifies the requirement for initial and annual training in forensic identification for all AF Dentists. Currently, to ensure that dentists are

  15. Introducing an Avatar Acceptance Model: Student Intention to Use 3D Immersive Learning Tools in an Online Learning Classroom

    ERIC Educational Resources Information Center

    Kemp, Jeremy William

    2011-01-01

    This quantitative survey study examines the willingness of online students to adopt an immersive virtual environment as a classroom tool and compares this with their feelings about more traditional learning modes including our ANGEL learning management system and the Elluminate live Web conferencing tool. I surveyed 1,108 graduate students in…

  16. Ontological implications of being in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Morie, Jacquelyn F.

    2008-02-01

    The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical world, but we are also embodied within the virtual by means of technology that translates our bodied actions into interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where we can exist simultaneously in two spaces at once, with the possible exception of metaphysical states such as shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being, how we enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of wondrous experiences we can both hope for and expect.

  17. Spilling the beans on java 3D: a tool for the virtual anatomist.

    PubMed

    Guttmann, G D

    1999-04-15

    The computing world has just provided the anatomist with another tool: Java 3D, within the Java 2 platform. On December 9, 1998, Sun Microsystems released Java 2. Java 3D classes are now included in the jar (Java Archive) archives of the extensions directory of Java 2. Java 3D is also a part of the Java Media Suite of APIs (Application Programming Interfaces). But what is Java? How does Java 3D work? How do you view Java 3D objects? A brief introduction to the concepts of Java and object-oriented programming is provided, along with a short description of the tools of Java 3D and of the Java 3D viewer. Thus, the virtual anatomist has another set of computer tools to use for modeling various aspects of anatomy, such as embryological development. The virtual anatomist will also be able to assist the surgeon with virtual surgery using the tools found in Java 3D. Java 3D will be able to fill gaps currently found in many anatomical computer-aided learning programs, such as the lack of platform independence, interactivity, and manipulability of 3D images.

  18. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the Engelbourg ruined castle in Thann, Alsace, France, has for some years been the focus of attention of the city, which owns it, and of partners such as historians and archaeologists who are in charge of its study. The enhancement of the site is one of the main objectives, together with its conservation and a better knowledge of it. The aim of this project is to use the environment of the virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionality, in particular through diverse scripts that convert the viewer into a real 3D interface. Beginning with a first virtual tour that contains about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making visualization very concrete, almost lively. After pertinent points of view were chosen, panoramic images were produced. For the documentation, other sets of images were acquired in various seasons and climate conditions, which allows the site to be documented in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, which is virtual too, was also included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (reports made during the acquisition phases, the restoration works, the excavations, etc.) and digital georeferenced documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case for virtual tours, or to go from a panoramic photographic image

  19. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  20. A Parameterizable Framework for Replicated Experiments in Virtual 3D Environments

    NASA Astrophysics Data System (ADS)

    Biella, Daniel; Luther, Wolfram

    This paper reports on a parameterizable 3D framework that provides 3D content developers with an initial spatial starting configuration, metaphorical connectors for accessing exhibits or interactive 3D learning objects or experiments, and other optional 3D extensions, such as a multimedia room, a gallery, username identification tools and an avatar selection room. The framework is implemented in X3D and uses a Web-based content management system. It has been successfully used for an interactive virtual museum for key historical experiments and in two additional interactive e-learning implementations: an African arts museum and a virtual science centre. It can be shown that, by reusing the framework, the production costs for the latter two implementations can be significantly reduced and content designers can focus on developing educational content instead of producing cost-intensive out-of-focus 3D objects.

  1. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  2. Mobile immersive virtual technologies for professional communities of practice.

    PubMed

    De Micheli, Caterina; Galimberti, Carlo

    2009-01-01

    This paper presents the development of an Immersive Virtual Technology (IVT) system serving a community of practice consisting of psychotherapists who use virtual environments for therapy and treatment of anxiety disorders. The psychosocial theoretical background includes the ethnomethodological approach, Situated Action Theory and the Intersubjectivity of the Utterance model. The dialogical importance promoted at each level of the analysis phases becomes the key to a deeper and more fluid understanding of the assumptions and meaning that guide the actions of and interactions between therapists and patients. The entire system design process is inspired by a dialogical perspective, which aims to effectively and non-rigidly integrate the design stages, analysis in context of use, ergonomic evaluation, creation of the virtual reality (VR) system, and final work on the clinical protocol in use.

  3. Transformed Social Interaction, Augmented Gaze, and Social Influence in Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Bailenson, Jeremy N.; Beall, Andrew C.; Loomis, Jack; Blascovich, Jim; Turk, Matthew

    2005-01-01

    Immersive collaborative virtual environments (CVEs) are simulations in which geographically separated individuals interact in a shared, three-dimensional, digital space using immersive virtual environment technology. Unlike videoconference technology, which transmits direct video streams, immersive CVEs accurately track movements of interactants…

  4. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspection of nuclear waste sites, and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. These include combinations of individual vertical sections, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. These methods should increase the spatial perception of the structures and thus of the processes in the subsurface. Stereoscopic techniques are implemented, for example, in the CAVE and the WALL, both of which require a lot of space and considerable technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data is possible when changing the viewing angle and the data section; • defining areas in the stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange through simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be integrated into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  5. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  6. The Virtual-casing Principle For 3D Toroidal Systems

    SciTech Connect

    Lazerson, Samuel A.

    2014-02-24

    The capability to calculate the magnetic field due to the plasma currents in a toroidally confined magnetic fusion equilibrium is of manifest relevance to equilibrium reconstruction and stellarator divertor design. Two methodologies arise for calculating such quantities. The first is a volume integral over the plasma current density for a given equilibrium; such an integral is computationally expensive. The second is a surface integral over a surface current on the equilibrium boundary. This method is computationally desirable, as the cost does not grow with the radial resolution required by the volume integral. This surface integral method has come to be known as the "virtual-casing principle". In this paper, a full derivation of this method is presented along with a discussion regarding its optimal application.
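    To make the comparison concrete, a commonly quoted schematic form of the two calculations is given below: the Biot-Savart volume integral over the plasma current density versus the virtual-casing surface integral over an equivalent surface current on the boundary. Sign and normalization conventions vary between derivations, and the precise treatment of evaluation points on the boundary itself is part of the derivation presented in the paper.

```latex
% Volume form: Biot-Savart integral over the plasma current density J.
\mathbf{B}_{\mathrm{pl}}(\mathbf{x})
  = \frac{\mu_{0}}{4\pi}\int_{V}
    \frac{\mathbf{J}(\mathbf{x}')\times(\mathbf{x}-\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|^{3}}\,dV'

% Virtual-casing form: for points outside the plasma, with the total field B tangent to the
% boundary surface S (a flux surface), the same field follows from an equivalent surface current.
\mathbf{B}_{\mathrm{pl}}(\mathbf{x})
  = \frac{\mu_{0}}{4\pi}\oint_{S}
    \frac{\mathbf{K}(\mathbf{x}')\times(\mathbf{x}-\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|^{3}}\,dS',
  \qquad
  \mathbf{K} = \frac{\hat{\mathbf{n}}\times\mathbf{B}}{\mu_{0}}
```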

  7. Computer-assisted three-dimensional surgical planning: 3D virtual articulator: technical note.

    PubMed

    Ghanai, S; Marmulla, R; Wiechnik, J; Mühling, J; Kotrikova, B

    2010-01-01

    This study presents a computer-assisted planning system for dysgnathia treatment. It describes the process of information gathering using a virtual articulator and how the splints are constructed for orthognathic surgery. The deviation of the virtually planned splints is shown in six cases on the basis of conventionally planned cases. In all cases the plaster models were prepared and scanned using a 3D laser scanner. Successive lateral and posterior-anterior cephalometric images were used for reconstruction before surgery. By identifying specific points on the X-rays and marking them on the virtual models, it was possible to enhance the 2D images to create a realistic 3D environment and to perform virtual repositioning of the jaw. A hexapod was used to transfer the virtual planning to the real splints. Preliminary results showed that conventional repositioning could be replicated using the virtual articulator.

  8. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among the many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  9. Hand Controlled Manipulation of Single Molecules via a Scanning Probe Microscope with a 3D Virtual Reality Interface.

    PubMed

    Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan

    2016-10-02

    Considering organic molecules as the functional building blocks of future nanoscale technology, the question of how to arrange and assemble such building blocks in a bottom-up approach is still open. The scanning probe microscope (SPM) could be a tool of choice; however, SPM-based manipulation was until recently limited to two dimensions (2D). Binding the SPM tip to a molecule at a well-defined position opens up the possibility of controlled manipulation in 3D space. Unfortunately, 3D manipulation is largely incompatible with the typical 2D paradigm of viewing and generating SPM data on a computer. For intuitive and efficient manipulation we therefore couple a low-temperature non-contact atomic force/scanning tunneling microscope (LT NC-AFM/STM) to a motion capture system and fully immersive virtual reality goggles. This setup permits "hand controlled manipulation" (HCM), in which the SPM tip is moved according to the motion of the experimenter's hand, while the tip trajectories as well as the response of the SPM junction are visualized in 3D. HCM paves the way to the development of complex manipulation protocols, potentially leading to a better fundamental understanding of nanoscale interactions acting between molecules on surfaces. Here we describe the setup and the steps needed to achieve successful hand-controlled molecular manipulation within the virtual reality environment.
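    The core of such a hand-controlled manipulation loop is simply a scaled mapping from tracked hand displacement to tip displacement, with safety limits on the approach axis. The toy sketch below illustrates the idea; the scale factor, clamp limits, and the two I/O stubs are hypothetical and are not taken from the authors' setup.

```python
# Toy hand-controlled-manipulation step: hand motion (metres) -> tip motion (metres, nm scale).
# Scale factor, limits, and the I/O stubs are hypothetical placeholders.
import numpy as np

HAND_TO_TIP_SCALE = 1e-7          # 10 cm of hand travel -> 10 nm of tip travel
Z_MIN, Z_MAX = -2.0e-9, 5.0e-9    # clamp tip height relative to its starting point


def read_hand_position() -> np.ndarray:
    """Stub for the motion-capture query (returns x, y, z in metres)."""
    return np.zeros(3)


def move_tip(position: np.ndarray) -> None:
    """Stub for the SPM controller command."""
    print("tip ->", position)


def hcm_step(hand_origin: np.ndarray, tip_origin: np.ndarray) -> np.ndarray:
    """Map the current hand displacement to a clamped tip position and send it."""
    displacement = (read_hand_position() - hand_origin) * HAND_TO_TIP_SCALE
    tip = tip_origin + displacement
    tip[2] = np.clip(tip[2], tip_origin[2] + Z_MIN, tip_origin[2] + Z_MAX)
    move_tip(tip)
    return tip


if __name__ == "__main__":
    hand0 = read_hand_position()
    tip0 = np.array([0.0, 0.0, 5.0e-10])   # hypothetical starting tip position
    hcm_step(hand0, tip0)
```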

  10. How incorporation of scents could enhance immersive virtual experiences

    PubMed Central

    Ischer, Matthieu; Baron, Naëm; Mermoud, Christophe; Cayeux, Isabelle; Porcherot, Christelle; Sander, David; Delplanque, Sylvain

    2014-01-01

    Under normal everyday conditions, senses all work together to create experiences that fill a typical person's life. Unfortunately for behavioral and cognitive researchers who investigate such experiences, standard laboratory tests are usually conducted in a nondescript room in front of a computer screen. They are very far from replicating the complexity of real world experiences. Recently, immersive virtual reality (IVR) environments became promising methods to immerse people into an almost real environment that involves more senses. IVR environments provide many similarities to the complexity of the real world and at the same time allow experimenters to constrain experimental parameters to obtain empirical data. This can eventually lead to better treatment options and/or new mechanistic hypotheses. The idea that increasing sensory modalities improve the realism of IVR environments has been empirically supported, but the senses used did not usually include olfaction. In this technology report, we will present an odor delivery system applied to a state-of-the-art IVR technology. The platform provides a three-dimensional, immersive, and fully interactive visualization environment called “Brain and Behavioral Laboratory—Immersive System” (BBL-IS). The solution we propose can reliably deliver various complex scents during different virtual scenarios, at a precise time and space and without contamination of the environment. The main features of this platform are: (i) the limited cross-contamination between odorant streams with a fast odor delivery (< 500 ms), (ii) the ease of use and control, and (iii) the possibility to synchronize the delivery of the odorant with pictures, videos or sounds. How this unique technology could be used to investigate typical research questions in olfaction (e.g., emotional elicitation, memory encoding or attentional capture by scents) will also be addressed. PMID:25101017

  11. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  12. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  13. Simulation and visualization of mechanical systems in immersive virtual environments

    SciTech Connect

    Canfield, T. R.

    1998-04-17

    A prototype for doing real-time simulation of mechanical systems in immersive virtual environments has been developed to run in the CAVE and on the ImmersaDesk at Argonne National Laboratory. This system has three principal software components: a visualization component for rendering the model and providing a user interface, communications software, and mechanics simulation software. The system can display the three-dimensional objects in the CAVE and project various scalar fields onto the exterior surface of the objects during real-time execution.
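    A hypothetical sketch of the three-part split described above, with a stand-in mechanics solver streaming surface scalar fields through a communication channel to a stand-in rendering loop; in the real system the renderer would map each field onto the model's exterior surface in the CAVE rather than print it.

```python
# Hypothetical three-component split: mechanics solver -> communications layer -> renderer.
import multiprocessing as mp
import numpy as np


def mechanics(channel, steps=5):
    """Stand-in mechanics solver: produce a surface scalar field each time step."""
    for step in range(steps):
        stress = np.random.rand(100)      # scalar field on exterior surface nodes
        channel.put((step, stress))
    channel.put(None)                     # sentinel: simulation finished


def visualization(channel):
    """Stand-in renderer: would project each field onto the model's surface in the CAVE."""
    while (msg := channel.get()) is not None:
        step, stress = msg
        print(f"step {step}: max surface stress {stress.max():.3f}")


if __name__ == "__main__":
    queue = mp.Queue()                    # the 'communications software' layer
    solver = mp.Process(target=mechanics, args=(queue,))
    solver.start()
    visualization(queue)
    solver.join()
```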

  14. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological viewing effects. To create truly fascinating three-dimensional television programs, a virtual studio is required that performs the tasks of generating, editing, and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
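    The disparity-based depth extraction mentioned above can be illustrated with a bare-bones sum-of-squared-differences (SSD) block match between two rectified images, followed by the usual depth-from-disparity conversion; the window size, search range, focal length, and baseline below are illustrative assumptions, not values from the paper.

```python
# Bare-bones SSD block matching between two rectified images (illustrative only).
import numpy as np


def ssd_disparity(left, right, window=5, max_disp=32):
    """Integer disparity map from a squared-difference window match along each scanline."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    half = window // 2
    rows, cols = left.shape
    disparity = np.zeros((rows, cols), dtype=np.int32)
    for r in range(half, rows - half):
        for c in range(half + max_disp, cols - half):
            ref = left[r - half:r + half + 1, c - half:c + half + 1]
            costs = [np.sum((ref - right[r - half:r + half + 1,
                                         c - d - half:c - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disparity[r, c] = int(np.argmin(costs))
    return disparity


def depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.01):
    """Depth for a parallel camera pair: Z = f * B / d (infinite where d == 0)."""
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)
```

    Multiple-baseline matching extends the same idea by summing the SSD costs over several image pairs with different baselines before taking the minimum, which sharpens the cost minimum and reduces ambiguity.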

  15. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, and close range photogrammetry-based modeling. The literature shows that, to date, there is no complete solution available for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper presents a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: the data acquisition process, 3D data processing, and the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required, most suitable video frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area; after scaling, alignment, texturing, and rendering, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries
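    As a small illustration of the frame-selection step in the data acquisition stage, the sketch below samples every n-th frame from a recorded video and keeps only reasonably sharp frames using a variance-of-Laplacian test; the file name, sampling interval, and blur threshold are assumptions rather than the authors' actual selection criteria.

```python
# Illustrative frame sampling from camera video prior to photogrammetric processing.
import cv2


def extract_frames(video_path, every_n=15, blur_threshold=100.0):
    """Keep every n-th frame that passes a simple sharpness test."""
    capture = cv2.VideoCapture(video_path)
    kept, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Variance of the Laplacian is a common proxy for image sharpness.
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                kept.append(frame)
        index += 1
    capture.release()
    return kept


frames = extract_frames("campus_camera_01.mp4")   # hypothetical recording
for i, frame in enumerate(frames):
    cv2.imwrite(f"frame_{i:04d}.jpg", frame)
```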

  16. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4K projectors and two 2K projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle design steps [1]. Starting from vehicle Design, through the subsequent Engineering steps, Ergonomic evaluation and perceived quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as an iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by the current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a 4-sided homemade low-cost virtual reality room, powered by ultra-short-range and standard HD home projectors.

  17. The Effect Of 3D Audio And Other Audio Techniques On Virtual Reality Experience.

    PubMed

    Brinkman, Willem-Paul; Hoekstra, Allart R D; van Egmond, René

    2015-01-01

    Three studies were conducted to examine the effect of audio on people's experience in a virtual world. The first study showed that people could distinguish between mono, stereo, Dolby surround and 3D audio of a wasp. The second study found significant effects for audio techniques on people's self-reported anxiety, presence, and spatial perception. The third study found that adding sound to a visual virtual world had a significant effect on people's experience (including heart rate), while it found no difference in experience between stereo and 3D audio.

  18. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (ie, paper and pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  19. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow
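    The Jacobian-free Newton-Krylov solver mentioned above rests on the standard finite-difference approximation of Jacobian-vector products, so the Jacobian is never formed explicitly; the expression below is the generic textbook form rather than the specific perturbation scaling used in the paper.

```latex
% Matrix-free approximation of the Jacobian-vector product used inside each Krylov iteration:
J(\mathbf{u})\,\mathbf{v} \;\approx\;
  \frac{\mathbf{F}(\mathbf{u}+\varepsilon\mathbf{v})-\mathbf{F}(\mathbf{u})}{\varepsilon},
\qquad
\varepsilon = \frac{\sqrt{\epsilon_{\mathrm{mach}}}\,\bigl(1+\lVert\mathbf{u}\rVert\bigr)}{\lVert\mathbf{v}\rVert}
```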

  20. Virtual surgical planning and 3D printing in repeat calvarial vault reconstruction for craniosynostosis: technical note.

    PubMed

    LoPresti, Melissa; Daniels, Bradley; Buchanan, Edward P; Monson, Laura; Lam, Sandi

    2017-02-03

    Repeat surgery for restenosis after initial nonsyndromic craniosynostosis intervention is sometimes needed. Calvarial vault reconstruction through a healed surgical bed adds a level of intraoperative complexity and may benefit from preoperative and intraoperative definitions of biometric and aesthetic norms. Computer-assisted design and manufacturing using 3D imaging allows the precise formulation of operative plans in anticipation of surgical intervention. 3D printing turns virtual plans into anatomical replicas, templates, or customized implants by using a variety of materials. The authors present a technical note illustrating the use of this technology: a repeat calvarial vault reconstruction that was planned and executed using computer-assisted design and 3D printed intraoperative guides.

  1. Assessment of radiation awareness training in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Whisker, Vaughn E., III

    The prospect of new nuclear power plant orders in the near future and the graying of the current workforce create a need to train new personnel faster and better. Immersive virtual reality (VR) may offer a solution to the training challenge. VR technology presented in a CAVE Automatic Virtual Environment (CAVE) provides a high-fidelity, one-to-one scale environment where areas of the power plant can be recreated and virtual radiation environments can be simulated, making it possible to safely expose workers to virtual radiation in the context of the actual work environment. The use of virtual reality for training is supported by many educational theories; constructivism and discovery learning, in particular. Educational theory describes the importance of matching the training to the task. Plant access training and radiation worker training, common forms of training in the nuclear industry, rely on computer-based training methods in most cases, which effectively transfer declarative knowledge, but are poor at transferring skills. If an activity were to be added, the training would provide personnel with the opportunity to develop skills and apply their knowledge so they could be more effective when working in the radiation environment. An experiment was developed to test immersive virtual reality's suitability for training radiation awareness. Using a mixed methodology of quantitative and qualitative measures, the subjects' performances before and after training were assessed. First, subjects completed a pre-test to measure their knowledge prior to completing any training. Next they completed unsupervised computer-based training, which consisted of a PowerPoint presentation and a PDF document. After completing a brief orientation activity in the virtual environment, one group of participants received supplemental radiation awareness training in a simulated radiation environment presented in the CAVE, while a second group, the control group, moved directly to the

  2. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  3. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    ERIC Educational Resources Information Center

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  4. Teaching Digital Natives: 3-D Virtual Science Lab in the Middle School Science Classroom

    ERIC Educational Resources Information Center

    Franklin, Teresa J.

    2008-01-01

    This paper presents the development of a 3-D virtual environment in Second Life for the delivery of standards-based science content for middle school students in the rural Appalachian region of Southeast Ohio. A mixed method approach in which quantitative results of improved student learning and qualitative observations of implementation within…

  5. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-06-18

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor.

  6. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
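    A compact sketch of the dynamic time warping (DTW) distance that underlies gesture matching of the kind described above; this is the generic DTW recurrence rather than the authors' segmentation-aware variant, and the two example sequences are made up.

```python
# Generic dynamic time warping (DTW) distance between two gesture feature sequences.
import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW with Euclidean local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
    return float(cost[n, m])


# Made-up example: compare a recorded gesture against a stored template.
template = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 0.9], [0.4, 0.1, 0.8]])
candidate = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.3, 0.1, 0.85], [0.4, 0.1, 0.8]])
print(f"DTW distance: {dtw_distance(template, candidate):.3f}")
```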

  7. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…

  8. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  9. The Cognitive Apprenticeship Theory for the Teaching of Mathematics in an Online 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Paraskeva, Fotini

    2013-01-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective.…

  10. Hyper-NPSNET: A Virtual World with an Integrated 3D Hypertext

    DTIC Science & Technology

    1992-03-26

    ...on a nuclear propulsion plant, for example, that offers the capability to travel to the various engineering spaces at an instant in time to see the... status of the plant from various perspectives. The same engineer could even enter the virtual reactor vessel and examine fluid levels, navigate the... "immerse" oneself in a distracting world, or even providing the gateway to the development of cyborgs and other machine-enhanced creatures [Foley87

  11. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  12. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large datasets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow large-scale data to be accessed and visualized on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts, and provides a visually striking platform with realistic terrain and weather information, and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display.

  13. Using virtual 3D audio in multispeech channel and multimedia environments

    NASA Astrophysics Data System (ADS)

    Orosz, Michael D.; Karplus, Walter J.; Balakrishnan, Jerry D.

    2000-08-01

    The advantages and disadvantages of using virtual 3-D audio in mission-critical, multimedia display interfaces were evaluated. The 3D audio platform seems to be an especially promising candidate for aircraft cockpits, flight control rooms, and other command and control environments in which operators must make mission-critical decisions while handling demanding and routine tasks. Virtual audio signal processing creates the illusion for a listener wearing conventional earphones that each of a multiplicity of simultaneous speech or audio channels is originating from a different, program- specified location in virtual space. To explore the possible uses of this new, readily available technology, a test bed simulating some of the conditions experienced by the chief flight test coordinator at NASA's Dryden Flight Research Center was designed and implemented. Thirty test subjects simultaneously performed routine tasks requiring constant hand-eye coordination, while monitoring four speech channels, each generating continuous speech signals, for the occurrence of pre-specified keywords. Performance measures included accuracy in identifying the keywords, accuracy in identifying the speaker of the keyword, and response time. We found substantial improvements on all of these measures when comparing virtual audio with conventional, monaural transmissions. We also explored the effect on operator performance of different spatial configurations of the audio sources in 3-D space, simulated movement (dither) in the source locations, and of providing graphical redundancy. Some of these manipulations were less effective and may even decrease performance efficiency, even though they improve some aspects of the virtual space simulation.

  14. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls.

  15. Effect of viewing mode on pathfinding in immersive Virtual Reality.

    PubMed

    White, Paul J; Byagowi, Ahmad; Moussavi, Zahra

    2015-08-01

    The use of Head Mounted Displays (HMDs) to view Virtual Reality Environments (VREs) has received much attention recently. This paper reports on the difference between navigation in a VRE viewed through an HMD and navigation in the same VRE viewed on a laptop PC display. A novel Virtual Reality (VR) navigation input device (VRNChair), designed by our team, was paired with an Oculus Rift DK2 HMD. People used the VRNChair to navigate a VRE, and we analyzed their navigational trajectories with and without the HMD to investigate possible differences in performance due to the display device. It was found that people's navigational trajectories were more accurate while wearing the HMD than while viewing an LCD monitor; however, the time to complete a navigation task remained the same. This implies that increased immersion in VR results in an improvement in pathfinding. In addition, motion sickness caused by using an HMD can be reduced if one uses an input device such as our VRNChair. The VRNChair paired with an HMD provides vestibular stimulation as one moves in the VRE, because movements in the VRE are synchronized with movements in the real environment.

  16. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, together with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  17. vPresent: A cloud based 3D virtual presentation environment for interactive product customization

    NASA Astrophysics Data System (ADS)

    Nan, Xiaoming; Guo, Fei; He, Yifeng; Guan, Ling

    2013-09-01

    In modern society, many companies offer product customization services to their customers. There are two major issues in providing customized products. First, product manufacturers need to effectively present their products to customers who may be located in any geographical area. Second, customers need to be able to provide their feedback on the product in real time. However, traditional presentation approaches cannot effectively convey sufficient information about the product or efficiently adjust the product design according to customers' real-time feedback. In order to address these issues, we propose vPresent, a cloud-based 3D virtual presentation environment, in this paper. In vPresent, the product expert can show the 3D virtual product to remote customers and dynamically customize the product based on their feedback, while customers can provide their opinions in real time as they view a vivid 3D visualization of the product. Since the proposed vPresent is a cloud-based system, customers are able to access the customized virtual products from anywhere at any time, via desktop, laptop, or even smart phone. The proposed vPresent is expected to effectively deliver 3D visual information to customers and provide an interactive design platform for the development of customized products.

  18. MAT3D: a virtual reality modeling language environment for the teaching and learning of mathematics.

    PubMed

    Pasqualotti, Adriano; dal Sasso Freitas, Carla Maria

    2002-10-01

    Virtual Reality Modeling Language (VRML) is a platform-independent language that allows the creation of nonimmersive virtual environments (VEs) and their use through the Internet. In these VEs, the viewer may navigate and interact with virtual objects, moving around and visualizing them from different angles. Students can benefit from this technology because it gives them access to objects that illustrate the topics covered in their studies, in addition to oral and written information. In this work, we investigate the aspects involved in the use of VEs in teaching and learning and propose a conceptual model, called MAT3D, as a learning environment that can be used for the teaching and learning of mathematics. A case study is also presented, in which students use a virtual environment modeled in VRML. Data resulting from this study are analyzed statistically to evaluate the impact of this prototype when applied to the actual teaching and learning of mathematics.

  19. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    PubMed

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteomewide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  20. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    PubMed Central

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine (http://3d-e-chem.github.io/3D-e-Chem-VM/) that integrates cheminformatics and bioinformatics tools for the analysis of protein–ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein–ligand interaction data from proteomewide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb). PMID:28125221

  1. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around the full 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray corresponds to one that passes through a point on a virtual object's surface and is directed toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, the two eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  2. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena on Mars can be highly dynamic and show daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Suggested sources of this wave activity include dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single altitude layers have to be analyzed carefully, and relations between different atmospheric quantities and interactions with the surface of Mars have to be considered. The CROSS DRIVE project addresses the presentation of these data with a global view by means of virtual reality techniques. Complex orbiter data from spectrometers and observation data from Earth are combined with global circulation models and high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from these datasets and can change visualization parameters in real time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of Mars' atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces together, which enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale, from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users, so that everybody is always
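
    The abstract does not describe the synchronization protocol in detail; the sketch below is only a hypothetical illustration of the kind of shared-state update (here an added cutplane) that such a distributed workspace could broadcast so that every connected site stays consistent.

```python
# Hypothetical sketch of a shared-state update message for a distributed
# visualization workspace; the actual CROSS DRIVE protocol is not published
# in this abstract. Every local change is serialized and broadcast so that
# remote sites can apply it to their copy of the scene.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class FeatureUpdate:
    user_id: str                  # which avatar made the change
    feature: str                  # e.g. "cutplane", "annotation", "volume"
    params: dict                  # feature-specific parameters
    timestamp: float = field(default_factory=time.time)

def encode(update: FeatureUpdate) -> bytes:
    return json.dumps(asdict(update)).encode("utf-8")

def apply_update(scene: dict, payload: bytes) -> None:
    """Apply a remote update to the local scene dictionary."""
    msg = json.loads(payload.decode("utf-8"))
    scene.setdefault(msg["feature"], []).append(msg)

if __name__ == "__main__":
    scene = {}
    upd = FeatureUpdate("scientist_1", "cutplane",
                        {"origin": [0, 0, 20e3], "normal": [0, 0, 1]})
    apply_update(scene, encode(upd))   # remote sites would do the same
    print(scene["cutplane"][0]["params"])
```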

  3. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling, and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms involved in walking through the virtual space, then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  4. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  5. The Rufous Hummingbird in hovering flight -- full-body 3D immersed boundary simulation

    NASA Astrophysics Data System (ADS)

    Ferreira de Sousa, Paulo; Luo, Haoxiang; Bocanegra Evans, Humberto

    2009-11-01

    Hummingbirds are an interesting case study for the development of micro-air vehicles since, during hovering flight, they combine the high flight stability of insects with the low metabolic power per unit of body mass of bats. In this study, simulations of a full-body hummingbird in hovering flight were performed at a Reynolds number around 3600. The simulations employ a versatile sharp-interface immersed boundary method recently enhanced at our lab that can treat thin membranes and solid bodies alike. Implemented on a Cartesian mesh, the numerical method allows us to capture the vortex dynamics of the wake accurately and efficiently. The whole-body simulation will allow us to clearly identify the three general patterns of flow velocity around the body of the hummingbird referred to in Altshuler et al. (Exp Fluids 46 (5), 2009). One focus of the current study is to understand the interaction between the wakes of the two wings at the end of the upstroke, and how the tail actively deflects the flow to contribute to pitch stability. Another focus of the study is to identify the pair of unconnected loops underneath each wing.
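
    The paper's sharp-interface solver is considerably more involved, but the basic immersed-boundary idea of coupling a body to a Cartesian mesh can be sketched with the classic regularized delta function; the example below (an illustration under that simplification, not the authors' method) spreads a force from one Lagrangian marker onto a 2D grid.

```python
# Toy 2D example of the classic immersed-boundary coupling step: a force at a
# Lagrangian marker is spread onto a Cartesian grid through Peskin's 4-point
# regularized delta function. (The paper uses a sharp-interface variant; this
# only illustrates the general Cartesian-grid coupling idea.)
import numpy as np

def delta_1d(r):
    """Peskin 4-point discrete delta, argument r in grid units."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(grid_shape, h, marker_xy, force_xy):
    """Spread a point force at marker_xy onto a 2D grid with spacing h."""
    fx = np.zeros(grid_shape)
    fy = np.zeros(grid_shape)
    i0, j0 = int(marker_xy[0] / h), int(marker_xy[1] / h)
    for i in range(i0 - 2, i0 + 3):          # 4-point support around the marker
        for j in range(j0 - 2, j0 + 3):
            w = (delta_1d((marker_xy[0] - i * h) / h) *
                 delta_1d((marker_xy[1] - j * h) / h)) / h**2
            fx[i, j] += force_xy[0] * w
            fy[i, j] += force_xy[1] * w
    return fx, fy

if __name__ == "__main__":
    fx, fy = spread_force((64, 64), h=0.1, marker_xy=(3.21, 3.37),
                          force_xy=(1.0, 0.0))
    print(fx.sum() * 0.1**2)   # ~1.0: the spread force integrates to the input
```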

  6. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506
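
    The scalability analysis mentioned above rests on Amdahl's law. The helper below evaluates the standard speedup formula for an assumed parallel fraction; the numbers are illustrative, not measurements from the paper.

```python
# Amdahl's law for a program whose parallelizable fraction is p, run on n
# cores: speedup = 1 / ((1 - p) + p / n). The parallel fraction below is an
# assumed illustrative value, not a figure reported in the paper.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.95                      # hypothetical parallel fraction
    for n in (1, 4, 12, 48, 1024):
        print(f"{n:5d} cores -> speedup {amdahl_speedup(p, n):6.2f}")
    # As n grows, the speedup saturates at 1 / (1 - p) = 20 for p = 0.95.
```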

  8. An investigation into factors influencing immersion in interactive virtual reality environments.

    PubMed

    Bangay, S; Preston, L

    1998-01-01

    Two interactive virtual reality environments were used to identify factors that may affect, or be affected by, the degree of immersion in a virtual world. In particular, the level of stress in a "swimming with dolphins" simulation is measured, as is the degree of simulator sickness resulting from a virtual roller coaster. Analysis of the results indicates a relationship between the degree of immersion and the following factors: excitement, comfort, quality and age. The following factors are found to depend on the degree of immersion: simulator sickness, control, excitement and desire to repeat the experience.

  9. Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory

    PubMed Central

    Hagedorn, John G.; Dunkers, Joy P.; Satterfield, Steven G.; Peskin, Adele P.; Kelso, John T.; Terrill, Judith E.

    2007-01-01

    This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment. PMID:27110469

  10. The cognitive apprenticeship theory for the teaching of mathematics in an online 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Bouta, Hara; Paraskeva, Fotini

    2013-03-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective. To this end, we propose a pedagogical framework based on the cognitive apprenticeship for deriving principles and guidelines to inform the design, development and use of a 3D virtual environment. This study examines how the use of a 3D virtual world facilitates the teaching of mathematics in primary education by combining design principles and guidelines based on the Cognitive Apprenticeship Theory and the teaching methods that this theory introduces. We focus specifically on 5th and 6th grade students' engagement (behavioral, affective and cognitive) while learning fractional concepts over a period of two class sessions. Quantitative and qualitative analyses indicate considerable improvement in the engagement of the students who participated in the experiment. This paper presents the findings regarding students' cognitive engagement in the process of comprehending basic fractional concepts - notoriously hard for students to master. The findings are encouraging and suggestions are made for further research.

  11. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  12. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that were carried out in the lab.

  13. Early pregnancy placental bed and fetal vascular volume measurements using 3-D virtual reality.

    PubMed

    Reus, Averil D; Klop-van der Aa, Josine; Rifouna, Maria S; Koning, Anton H J; Exalto, Niek; van der Spek, Peter J; Steegers, Eric A P

    2014-08-01

    In this study, a new 3-D Virtual Reality (3D VR) technique for examining placental and uterine vasculature was investigated. The validity of placental bed vascular volume (PBVV) and fetal vascular volume (FVV) measurements was assessed and associations of PBVV and FVV with embryonic volume, crown-rump length, fetal birth weight and maternal parity were investigated. One hundred thirty-two patients were included in this study, and measurements were performed in 100 patients. Using V-Scope software, 100 3-D Power Doppler data sets of 100 pregnancies at 12 wk of gestation were analyzed with 3D VR in the I-Space Virtual Reality system. Volume measurements were performed with semi-automatic, pre-defined parameters. The inter-observer and intra-observer agreement was excellent with all intra-class correlation coefficients >0.93. PBVVs of multiparous women were significantly larger than the PBVVs of primiparous women (p = 0.008). In this study, no other associations were found. In conclusion, V-Scope offers a reproducible method for measuring PBVV and FVV at 12 wk of gestation, although we are unsure whether the volume measured represents the true volume of the vasculature. Maternal parity influences PBVV.

  14. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz, followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud, followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy, and new opportunities for longitudinal studies of cancer recurrence.
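
    The authors' automated pipeline is not published in this abstract, but one of its building blocks, recovering relative camera pose and a sparse point cloud from two calibrated frames, can be sketched with standard OpenCV calls as below; the intrinsics and frame paths in the usage note are placeholders.

```python
# Rough sketch of one structure-from-motion building block: recover the
# relative pose between two calibrated video frames and triangulate a sparse
# point cloud. (Illustrative only; the paper's automated pipeline covers many
# frames, meshing, and texture projection.) Requires opencv-python and numpy.
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """img1, img2: grayscale frames; K: 3x3 camera matrix from calibration."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
    P2 = K @ np.hstack([R, t])                           # second camera pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T                      # Nx3 sparse point cloud

# Hypothetical usage (frame paths and intrinsics are placeholders):
#   K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], float)
#   cloud = two_view_reconstruction(cv2.imread("frame_000.png", 0),
#                                   cv2.imread("frame_010.png", 0), K)
```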

  15. Effects of 3D Virtual Reality of Plate Tectonics on Fifth Grade Students' Achievement and Attitude toward Science

    ERIC Educational Resources Information Center

    Kim, Paul

    2006-01-01

    This study examines the effects of a teaching method using 3D virtual reality simulations on achievement and attitude toward science. An experiment was conducted with fifth-grade students (N = 41) to examine the effects of 3D simulations, designed to support inquiry-based science curriculum. An ANOVA analysis revealed that the 3D group scored…

  16. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    ERIC Educational Resources Information Center

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most participants…

  17. Visualization of CFD Results in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Wasfy, Tamer M.; Noor, Ahmed K.

    2001-01-01

    An object-oriented event-driven immersive virtual environment (VE) is described for the visualization of computational fluid dynamics (CFD) results. The VE incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. The fluid domain is discretized using either a multi-block structured grid or an unstructured finite element mesh. The VE allows natural 'fly-through' visualization of the model, the CFD grid, and the model's surroundings. In order to help visualize the flow and its effects on the model, the VE incorporates the following objects: stream objects (lines, surface-restricted lines, ribbons, and volumes); colored surfaces; elevation surfaces; surface arrows; global and local iso-surfaces; vortex cores; and separation/attachment surfaces and lines. Most of these objects can be used for dynamically probing the flow. Particle and arrow animations can be displayed on top of stream objects. Primitive response quantities as well as derived quantities can be used. A recursive tree search algorithm is used for real-time point and value search in the CFD grid.
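
    The abstract mentions a recursive tree search for real-time point and value lookup in the CFD grid. A minimal stand-in for that idea, assuming an unstructured node cloud and nearest-node sampling via a k-d tree rather than the VE's own data structure, looks like this:

```python
# Minimal nearest-node probe of an unstructured CFD result using a k-d tree,
# in the spirit of the "recursive tree search" the abstract mentions (the VE's
# own data structures are not described in detail here).
import numpy as np
from scipy.spatial import cKDTree

class FlowProbe:
    def __init__(self, node_xyz, node_values):
        self.tree = cKDTree(node_xyz)        # node_xyz: (N, 3) coordinates
        self.values = np.asarray(node_values)

    def sample(self, query_xyz):
        """Return the field value at the node nearest to each query point."""
        _, idx = self.tree.query(np.atleast_2d(query_xyz))
        return self.values[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nodes = rng.random((10000, 3))                  # stand-in CFD node cloud
    pressure = nodes[:, 0] ** 2                     # stand-in scalar field
    probe = FlowProbe(nodes, pressure)
    print(probe.sample([0.5, 0.5, 0.5]))            # value near the probe point
```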

  18. Enabling Field Experiences in Introductory Geoscience Classes through the Use of Immersive Virtual Reality

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.; Smith, E.; Sellers, V.; Wyant, P.; Boyer, D. M.; Mobley, C.; Brame, S.

    2015-12-01

    Although field experiences are an important aspect of geoscience education, the opportunity to provide physical world experiences to large groups of introductory students is often limited by access, logistical, and financial constraints. Our project (NSF IUSE 1504619) is investigating the use of immersive virtual reality (VR) technologies as a surrogate for real field experiences in introductory geosciences classes. We are developing a toolbox that leverages innovations in the field of VR, including the Oculus Rift and Google Cardboard, to enable every student in an introductory geology classroom the opportunity to have a first-person virtual field experience in the Grand Canyon. We have opted to structure our VR experience as an interactive game where students must explore the Canyon to accomplish a series of tasks designed to emphasize key aspects of geoscience learning. So far we have produced two demo products for the virtual field trip. The first is a standalone "Rock Box" app developed for the iPhone, which allows students to select different rock samples, examine them in 3D, and obtain basic information about the properties of each sample. The app can act as a supplement to the traditional rock box used in physical geology labs. The second product is a fully functioning VR environment for the Grand Canyon developed using satellite-based topographic and imagery data to retain real geologic features within the experience. Players can freely navigate to explore anywhere they desire within the Canyon, but are guided to points of interest where they are able to complete exercises that will be aligned with specific learning goals. To this point we have integrated elements of the "Rock Box" app within the VR environment, allowing players to examine 3D details of rock samples they encounter within the Grand Canyon. We plan to provide demos of both products and obtain user feedback during our presentation.

  19. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high quality rendering.
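
    The shape descriptors used by the actual search engine are not described here; as a rough stand-in for the colour part of the retrieval, the sketch below ranks inventory items by colour-histogram intersection with a query object.

```python
# Toy content-based retrieval in the spirit of the Boutique's search engine:
# rank inventory items by colour-histogram similarity to a query object.
# (Shape descriptors, which the real engine also uses, are omitted here.)
import numpy as np

def colour_histogram(rgb_pixels, bins=8):
    """rgb_pixels: (N, 3) array of 0-255 values; returns a normalized 3D histogram."""
    hist, _ = np.histogramdd(rgb_pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()

def rank_inventory(query_pixels, inventory):
    """inventory: dict name -> (N, 3) pixel array; returns names, most similar first."""
    q = colour_histogram(query_pixels)
    scores = {name: np.minimum(q, colour_histogram(px)).sum()  # histogram intersection
              for name, px in inventory.items()}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    red_item = rng.integers(0, 256, (5000, 3)); red_item[:, 0] = 220
    blue_item = rng.integers(0, 256, (5000, 3)); blue_item[:, 2] = 220
    query = rng.integers(0, 256, (5000, 3)); query[:, 0] = 210
    print(rank_inventory(query, {"red vase": red_item, "blue vase": blue_item}))
```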

  20. Investigating the interaction between positions and signals of height-channel loudspeakers in reproducing immersive 3d sound

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Antonios

    Since transmission capacities have significantly increased over the past few years, researchers are now able to transmit a larger amount of data, namely multichannel audio content, in consumer applications. What has not yet been investigated in a systematic way is how to deliver the multichannel content. Specifically, researchers' attention is focused on the quest for a standardized immersive reproduction format that incorporates height loudspeakers coupled with the new high-resolution and three-dimensional (3D) media content for a comprehensive 3D experience. To better understand and utilize immersive audio reproduction, this research focused on (1) the interaction between the positioning of height loudspeakers and the signals fed to the loudspeakers, (2) the investigation of the perceptual characteristics associated with the height ambiences, and (3) the influence of inverse filtering on perceived sound quality for realistic 3D sound reproduction. The experiment utilized two layers of loudspeakers: a horizontal layer following the ITU-R BS.775 five-channel loudspeaker configuration and a height layer with a total of twelve loudspeakers at azimuths of +/-30°, +/-50°, +/-70°, +/-90°, +/-110° and +/-130° and an elevation of 30°. Eight configurations were formed, each of which selected four of the twelve height loudspeakers. In the subjective evaluation, listeners compared, ranked and described the eight randomly presented configurations of 4-channel height ambiences. The stimuli for the experiment were four nine-channel (5 channels for the horizontal and 4 for the height loudspeakers) multichannel music excerpts. Moreover, an approach based on Finite Impulse Response (FIR) inverse filtering was attempted in order to remove the particular room's acoustic influence. Another set of trained professionals was informally asked to use descriptors to characterize the newly presented multichannel music with height ambiences rendered with inverse filtering. The
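
    The study's exact FIR design is not given in the abstract; one common way to build such an inverse filter, regularized inversion of a measured room impulse response in the frequency domain, can be sketched as follows (tap count and regularization are assumed values).

```python
# One common way to build an FIR inverse filter for a measured room impulse
# response: regularized inversion in the frequency domain. Illustrative only;
# the study's exact filter design is not described in the abstract.
import numpy as np

def inverse_fir(room_ir, n_taps=4096, beta=1e-3, delay=None):
    """Return an FIR filter h such that room_ir * h ~ a delayed unit impulse."""
    H = np.fft.rfft(room_ir, n_taps)
    delay = n_taps // 2 if delay is None else delay          # modelling delay
    target = np.exp(-2j * np.pi * np.arange(len(H)) * delay / n_taps)
    Hinv = np.conj(H) * target / (np.abs(H) ** 2 + beta)     # regularized inverse
    return np.fft.irfft(Hinv, n_taps)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ir = np.zeros(2000); ir[0] = 1.0                         # synthetic room IR
    ir[1:] += 0.3 * rng.standard_normal(1999) * np.exp(-np.arange(1999) / 300)
    h = inverse_fir(ir)
    equalized = np.convolve(ir, h)
    print(np.argmax(np.abs(equalized)))   # peak near the modelling delay (2048)
```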

  1. Steady-State VEP-Based Brain-Computer Interface Control in an Immersive 3D Gaming Environment

    NASA Astrophysics Data System (ADS)

    Lalor, E. C.; Kelly, S. P.; Finucane, C.; Burke, R.; Smith, R.; Reilly, R. B.; McDarby, G.

    2005-12-01

    This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phase-reversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed.
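
    As a rough illustration of SSVEP feature extraction of the kind described (not the paper's two specific estimators), the sketch below computes a Welch power spectrum and compares the power at two assumed checkerboard reversal frequencies to make the binary decision.

```python
# Sketch of SSVEP feature extraction: estimate the EEG power spectrum and
# compare power at the two checkerboard reversal frequencies to make a binary
# decision. (Frequencies and window length are illustrative, not the paper's.)
import numpy as np
from scipy.signal import welch

def ssvep_decision(eeg, fs, f_left=17.0, f_right=20.0, bw=0.5):
    """eeg: 1D occipital-channel segment; returns 'left' or 'right'."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    def band_power(f0):
        return psd[(freqs >= f0 - bw) & (freqs <= f0 + bw)].mean()
    return "left" if band_power(f_left) > band_power(f_right) else "right"

if __name__ == "__main__":
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    # Synthetic trial: a 17 Hz SSVEP buried in noise.
    eeg = (0.8 * np.sin(2 * np.pi * 17 * t)
           + np.random.default_rng(3).standard_normal(t.size))
    print(ssvep_decision(eeg, fs))        # expected: 'left'
```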

  2. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a given seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons 'seismoglyphs' to suggest visual objects built from the three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real-time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real-time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time, and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers and near-real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth
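
    A toy version of the glyph construction, with assumed mappings from a three-component window to glyph size (RMS amplitude), orientation (dominant eigenvector of the covariance matrix) and colour (time of peak amplitude), is sketched below.

```python
# Toy "seismoglyph" attributes from a three-component window: RMS amplitude
# for glyph size, the dominant particle-motion direction (largest eigenvector
# of the covariance matrix) for its spatial extent, and the time of the peak
# amplitude for its colour. Illustrative mappings, not the authors' exact ones.
import numpy as np

def seismoglyph(north, east, vertical, fs):
    data = np.vstack([north, east, vertical])          # 3 x N window
    size = np.sqrt((data ** 2).mean())                  # RMS amplitude
    _, eigvecs = np.linalg.eigh(np.cov(data))
    orientation = eigvecs[:, -1]                        # dominant polarization
    peak_time = np.argmax(np.abs(data).max(axis=0)) / fs
    colour = peak_time / (data.shape[1] / fs)           # 0..1 along a colour map
    return size, orientation, colour

if __name__ == "__main__":
    fs = 100
    t = np.arange(0, 10, 1 / fs)
    n = np.sin(2 * np.pi * 1.0 * t) * np.exp(-(t - 5) ** 2)   # synthetic arrival
    e = 0.4 * n
    z = 0.1 * np.random.default_rng(4).standard_normal(t.size)
    print(seismoglyph(n, e, z, fs))
```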

  3. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  4. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.

  5. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
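
    The depth-perception numbers behind this explanation are easy to work out; the sketch below computes the binocular parallax of a point and the on-screen disparity needed to place it at a chosen depth, using the 6.5 cm eye separation quoted above and otherwise illustrative viewing distances.

```python
# Back-of-the-envelope stereopsis numbers: binocular parallax of a point and
# the on-screen disparity that places it at a chosen depth. Eye separation is
# the ~6.5 cm figure from the text; the distances are illustrative defaults.
import math

EYE_SEPARATION = 0.065      # m

def parallax_angle_deg(distance_m):
    """Angle subtended between the two eyes' lines of sight to a point."""
    return math.degrees(2 * math.atan(EYE_SEPARATION / (2 * distance_m)))

def screen_disparity_m(screen_dist_m, point_dist_m):
    """Horizontal left/right image offset that places a point at point_dist_m
    when the display is at screen_dist_m (positive = behind the screen)."""
    return EYE_SEPARATION * (point_dist_m - screen_dist_m) / point_dist_m

if __name__ == "__main__":
    print(f"parallax at 0.5 m: {parallax_angle_deg(0.5):.2f} deg")
    print(f"parallax at 10 m : {parallax_angle_deg(10):.3f} deg")
    print(f"disparity to push a point to 2 m on a 0.7 m screen: "
          f"{screen_disparity_m(0.7, 2.0) * 1000:.1f} mm")
```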

  6. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  7. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes.

    PubMed

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-10-22

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  8. Dynamic WIFI-Based Indoor Positioning in 3D Virtual World

    NASA Astrophysics Data System (ADS)

    Chan, S.; Sohn, G.; Wang, L.; Lee, W.

    2013-11-01

    A web-based system based on the 3DTown project was proposed using the Google Earth plug-in, bringing information from indoor positioning devices and real-time sensors into an integrated 3D indoor and outdoor virtual world to visualize the dynamics of urban life within the 3D context of a city. We addressed a limitation of the 3DTown project, with particular emphasis on the video surveillance cameras used for indoor tracking purposes. The proposed solution was to utilize wireless local area network (WLAN) WiFi as a replacement technology for localizing objects of interest, due to the widespread availability and large coverage area of WiFi in indoor building spaces. Indoor positioning was performed using WiFi without modifying existing building infrastructure or introducing additional access points (APs). A hybrid probabilistic approach was used for indoor positioning based on a previously recorded WiFi fingerprint database in the Petrie Science and Engineering building at York University. In addition, we have developed a 3D building modeling module that allows for efficient reconstruction of outdoor building models to be integrated with indoor building models; a sensor module for receiving, distributing, and visualizing real-time sensor data; and a web-based visualization module for users to explore the dynamic urban life in a virtual world. To address the problems encountered in the implementation of the proposed system, we introduce approaches for the integration of indoor building models with indoor positioning data, as well as real-time sensor information and visualization on the web-based system. In this paper we report the preliminary results of our prototype system, demonstrating the system's capability to implement a dynamic 3D indoor and outdoor virtual world that is composed of discrete modules connected through pre-determined communication protocols.
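
    The paper's hybrid probabilistic localizer is not reproduced here; as a simple stand-in that shows the fingerprinting idea, the sketch below matches a live RSSI scan against a survey database with a k-nearest-neighbour average.

```python
# Sketch of fingerprint-based WiFi positioning of the general kind described:
# compare a live RSSI scan against a pre-recorded fingerprint database and
# average the k best-matching survey locations. (The paper uses a hybrid
# probabilistic method; this deterministic k-NN stand-in only shows the idea.)
import numpy as np

def locate(scan, fingerprints, k=3, missing_rssi=-100.0):
    """scan: dict AP -> RSSI; fingerprints: list of ((x, y), dict AP -> RSSI)."""
    aps = sorted({ap for _, fp in fingerprints for ap in fp} | set(scan))

    def to_vec(readings):
        return np.array([readings.get(ap, missing_rssi) for ap in aps])

    s = to_vec(scan)
    dists = [np.linalg.norm(s - to_vec(fp)) for _, fp in fingerprints]
    nearest = np.argsort(dists)[:k]
    return np.mean([np.asarray(fingerprints[i][0], float) for i in nearest], axis=0)

if __name__ == "__main__":
    db = [((0, 0), {"ap1": -40, "ap2": -70}),
          ((5, 0), {"ap1": -70, "ap2": -45}),
          ((0, 5), {"ap1": -55, "ap2": -60})]
    # Averages the two best-matching survey points for this scan.
    print(locate({"ap1": -42, "ap2": -68}, db, k=2))
```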

  9. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  10. An Examination of the Effects of Collaborative Scientific Visualization via Model-based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning Within an Immersive 3D World

    NASA Astrophysics Data System (ADS)

    Soleimani, Ali

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits associated with the use of scientific visualization tools involving model-based reasoning (MBR). Little is known, however, about the collaborative use of scientific visualization, via MBR, within an immersive 3D-world learning environment for helping to improve the perceived value of STEM learning and knowledge acquisition in a targeted domain such as geothermal energy. Geothermal energy was selected as the study's STEM focus because understanding in the domain is highly dependent on successfully integrating science and mathematics concepts. This study used a 2x2 mixed ANOVA design with repeated measures to analyze collaborative usage of a geothermal energy MBR model and its effects on learning within an immersive 3D world. The immersive 3D world used for the study is supported by the Open Simulator platform. Findings from this study can suggest ways to improve STEM learning and inform the design of MBR activities when conducted within an immersive 3D world.

  11. A new 3-D diagnosis strategy for duodenal malignant lesions using multidetector row CT, CT virtual duodenoscopy, duodenography, and 3-D multicholangiography.

    PubMed

    Sata, N; Endo, K; Shimura, K; Koizumi, M; Nagai, H

    2007-01-01

    Recent advances in multidetector row computed tomography (MD-CT) technology provide new opportunities for clinical diagnoses of various diseases. Here we assessed CT virtual duodenoscopy, duodenography, and three-dimensional (3D) multicholangiography created by MD-CT for the clinical diagnosis of duodenal malignant lesions. The study involved seven cases of periduodenal carcinoma (four ampullary carcinomas, two duodenal carcinomas, one pancreatic carcinoma). Biliary contrast medium was administered intravenously, followed by intravenous administration of an anticholinergic agent and oral administration of effervescent granules to expand the upper gastrointestinal tract. Following intravenous administration of a nonionic contrast medium, an upper abdominal MD-CT scan was performed in the left lateral position. Scan data were processed on a workstation to create CT virtual duodenoscopy, duodenography, 3D multicholangiography, and various postprocessing images, which were then evaluated for their effectiveness as preoperative diagnostic tools. Carcinoma location and extent were clearly demonstrated as defects or colored low-density areas in 3D multicholangiography images and as protruding lesions in virtual duodenography and duodenoscopy images. These findings were confirmed using multiplanar or curved planar reformation images. In conclusion, CT virtual duodenoscopy, duodenography, 3D multicholangiography, and various other images created by MD-CT alone provided the necessary and adequate preoperative diagnostic information.

  12. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects

    PubMed Central

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. PMID:28220752

  13. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects.

    PubMed

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case.

  14. The Road Less Travelled: The Journey of Immersion into the Virtual Field

    ERIC Educational Resources Information Center

    Fitzsimons, Sabrina

    2013-01-01

    This article provides an account of my experience of immersion as a third-level teacher into the three-dimensional multi-user virtual world Second Life for research purposes. An ethnographic methodology was employed. Three stages in this journey are identified: separation, transition and transformation. In presenting this journey of immersion, it…

  15. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only represent real-world objects naturally, realistically and vividly, but can also extend the campus in time and space by combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special grounds and other features; dynamic interactive functions are then realized by programming the object models exported from 3ds Max in VRML. The research focuses on virtual campus scene modeling technology and VRML scene design, together with optimization strategies for the various real-time processing techniques used in the scene design process, which preserve texture-map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  16. Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach

    SciTech Connect

    Ruotolo, Francesco; Maffei, Luigi; Di Gabriele, Maria; Iachini, Tina; Masullo, Massimiliano; Ruggiero, Gennaro; Senese, Vincenzo Paolo

    2013-07-15

    Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach for environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project and that, in turn, can help designers to make educated decisions. In the present study, the audio–visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being, depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study

  17. Exploring conformational search protocols for ligand-based virtual screening and 3-D QSAR modeling

    NASA Astrophysics Data System (ADS)

    Cappel, Daniel; Dixon, Steven L.; Sherman, Woody; Duan, Jianxin

    2015-02-01

    3-D ligand conformations are required for most ligand-based drug design methods, such as pharmacophore modeling, shape-based screening, and 3-D QSAR model building. Many studies of conformational search methods have focused on the reproduction of crystal structures (i.e. bioactive conformations); however, for ligand-based modeling the key question is how to generate a ligand alignment that produces the best results for a given query molecule. In this work, we study different conformation generation modes of ConfGen and the impact on virtual screening (Shape Screening and e-Pharmacophore) and QSAR predictions (atom-based and field-based). In addition, we develop a new search method, called common scaffold alignment, that automatically detects the maximum common scaffold between each screening molecule and the query to ensure identical coordinates of the common core, thereby minimizing the noise introduced by analogous parts of the molecules. In general, we find that virtual screening results are relatively insensitive to the conformational search protocol; hence, a conformational search method that generates fewer conformations could be considered "better" because it is more computationally efficient for screening. However, for 3-D QSAR modeling we find that more thorough conformational sampling tends to produce better QSAR predictions. In addition, significant improvements in QSAR predictions are obtained with the common scaffold alignment protocol developed in this work, which focuses conformational sampling on parts of the molecules that are not part of the common scaffold.
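
    As a rough illustration of the common scaffold alignment idea described above, the sketch below uses RDKit (an assumption on our part; the study itself relies on ConfGen and the associated Schrödinger tools) to find the maximum common substructure between a query and a screening molecule and to superimpose the shared core atoms.

```python
# Hedged sketch: common-core alignment with RDKit (not the authors' tool chain).
from rdkit import Chem
from rdkit.Chem import AllChem, rdFMCS

def align_to_query(query_smiles, probe_smiles):
    """Embed both molecules in 3D and align the probe's maximum common
    scaffold onto the query's coordinates, returning the core RMSD."""
    query = Chem.AddHs(Chem.MolFromSmiles(query_smiles))
    probe = Chem.AddHs(Chem.MolFromSmiles(probe_smiles))
    AllChem.EmbedMolecule(query, randomSeed=1)
    AllChem.EmbedMolecule(probe, randomSeed=1)

    # Maximum common substructure shared by query and probe
    mcs = rdFMCS.FindMCS([query, probe])
    core = Chem.MolFromSmarts(mcs.smartsString)
    q_match = query.GetSubstructMatch(core)
    p_match = probe.GetSubstructMatch(core)

    # Rigid superposition of the shared core atoms (probe onto query)
    rmsd = AllChem.AlignMol(probe, query, atomMap=list(zip(p_match, q_match)))
    return probe, rmsd

probe, rmsd = align_to_query("c1ccccc1CCN", "c1ccccc1CCO")
print(f"core RMSD after alignment: {rmsd:.3f} A")
```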

  18. Lead-oriented synthesis: Investigation of organolithium-mediated routes to 3-D scaffolds and 3-D shape analysis of a virtual lead-like library.

    PubMed

    Lüthy, Monique; Wheldon, Mary C; Haji-Cheteh, Chehasnah; Atobe, Masakazu; Bond, Paul S; O'Brien, Peter; Hubbard, Roderick E; Fairlamb, Ian J S

    2015-06-01

    Synthetic routes to six 3-D scaffolds containing piperazine, pyrrolidine and piperidine cores have been developed. The synthetic methodology focused on the use of N-Boc α-lithiation-trapping chemistry. Notably, suitably protected and/or functionalised medicinal chemistry building blocks were synthesised via concise, connective methodology. This represents a rare example of lead-oriented synthesis. A virtual library of 190 compounds was then enumerated from the six scaffolds. Of these, 92 compounds (48%) fit the lead-like criteria of: (i) -1 ≤ AlogP ≤ 3; (ii) 14 ≤ number of heavy atoms ≤ 26; (iii) total polar surface area ≥ 50 Å². The 3-D shapes of the 190 compounds were analysed using a triangular plot of normalised principal moments of inertia (PMI). From this, 46 compounds were identified which had lead-like properties and possessed 3-D shapes in under-represented areas of pharmaceutical space. Thus, the PMI analysis of the 190-member virtual library showed that, whilst the scaffolds may appear on paper to be 3-D in shape, only 24% of the compounds actually had 3-D structures in the more interesting areas of 3-D drug space.
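
    A minimal sketch, assuming RDKit as the toolkit (the paper does not state its software), of the two filters quoted above: the lead-like property window and the normalised principal moments of inertia (NPR1/NPR2) that place a compound on the triangular PMI plot.

```python
# Hedged sketch of lead-like filtering and PMI coordinates with RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem, Crippen, Descriptors3D, rdMolDescriptors

def lead_like(mol):
    """(i) -1 <= logP <= 3, (ii) 14-26 heavy atoms, (iii) TPSA >= 50 A^2."""
    logp = Crippen.MolLogP(mol)            # Crippen logP as a stand-in for AlogP
    heavy = mol.GetNumHeavyAtoms()
    tpsa = rdMolDescriptors.CalcTPSA(mol)
    return (-1 <= logp <= 3) and (14 <= heavy <= 26) and (tpsa >= 50)

def pmi_point(mol):
    """Return (NPR1, NPR2) for one embedded 3D conformation."""
    m = Chem.AddHs(mol)
    AllChem.EmbedMolecule(m, randomSeed=42)
    return Descriptors3D.NPR1(m), Descriptors3D.NPR2(m)

mol = Chem.MolFromSmiles("O=C(N1CCNCC1)c1ccccc1")   # illustrative compound
print("lead-like:", lead_like(mol), "NPR1/NPR2:", pmi_point(mol))
```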

  19. A virtual interface for interactions with 3D models of the human body.

    PubMed

    De Paolis, Lucio T; Pulimeno, Marco; Aloisio, Giovanni

    2009-01-01

    The developed system is the first prototype of a virtual interface designed to avoid contact with the computer so that the surgeon is able to visualize 3D models of the patient's organs more effectively during surgical procedure or to use this in the pre-operative planning. The doctor will be able to rotate, to translate and to zoom in on 3D models of the patient's organs simply by moving his finger in free space; in addition, it is possible to choose to visualize all of the organs or only some of them. All of the interactions with the models happen in real-time using the virtual interface which appears as a touch-screen suspended in free space in a position chosen by the user when the application is started up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen.

  20. Visualization of large scale geologically related data in virtual 3D scenes with OpenGL

    NASA Astrophysics Data System (ADS)

    Seng, Dewen; Liang, Xi; Wang, Hongxia; Yue, Guoying

    2007-11-01

    This paper demonstrates a method for three-dimensional (3D) reconstruction and visualization of large scale multidimensional surficial, geological and mine planning data with the programmable visualization environment OpenGL. A simulation system developed by the authors is presented for importing, filtering and visualizing of multidimensional geologically related data. The approach for the visual simulation of complicated mining engineering environment implemented in the system is described in detail. Aspects like presentations of multidimensional data with spatial dependence, navigation in the surficial and geological frame of reference and in time, interaction techniques are presented. The system supports real 3D landscape representations. Furthermore, the system provides many visualization methods for rendering multidimensional data within virtual 3D scenes and combines them with several navigation techniques. Real data derived from an iron mine in Wuhan City of China demonstrates the effectiveness and efficiency of the system. A case study with the results and benefits achieved by using real 3D representations and navigations of the system is given.

  1. 3D QSAR studies, pharmacophore modeling and virtual screening on a series of steroidal aromatase inhibitors.

    PubMed

    Xie, Huiding; Qiu, Kaixiong; Xie, Xiaoguang

    2014-11-14

    Aromatase is among the most important targets in the treatment of estrogen-dependent cancers. In order to search for potent steroidal aromatase inhibitors (SAIs) with fewer side effects and to overcome cellular resistance, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of SAIs to build 3D QSAR models. Reliable and predictive CoMFA and CoMSIA models were obtained with the following statistics (CoMFA: q² = 0.636, r²(ncv) = 0.988, r²(pred) = 0.658; CoMSIA: q² = 0.843, r²(ncv) = 0.989, r²(pred) = 0.601). This 3D QSAR approach provides significant insights that can be used to develop novel and potent SAIs. In addition, the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD) was used to derive 3D pharmacophore models. The selected pharmacophore model contains two acceptor atoms and four hydrophobic centers, and was used as a 3D query for virtual screening against the NCI2000 database. Six hit compounds were obtained and their biological activities were further predicted by the CoMFA and CoMSIA models; these predictions are expected to guide the design of potent and novel SAIs.
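
    The q² and r²(ncv) values quoted above are the cross-validated and non-validated fits that 3D QSAR studies routinely report. A generic sketch with scikit-learn on synthetic descriptors (not CoMFA/CoMSIA fields) shows how the two numbers are obtained.

```python
# Illustrative only: leave-one-out q^2 and fitted r^2 for a PLS model
# on synthetic "field" descriptors standing in for CoMFA/CoMSIA grids.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                              # grid field values (fake)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=40)   # pIC50 values (fake)

pls = PLSRegression(n_components=5)
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
press = np.sum((y - y_loo) ** 2)                  # predictive residual sum of squares
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)    # cross-validated q^2

pls.fit(X, y)
r2_ncv = pls.score(X, y)                          # non-cross-validated r^2
print(f"q2 = {q2:.3f}, r2(ncv) = {r2_ncv:.3f}")
```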

  2. Virtual Sculpting and 3D Printing for Young People with Disabilities.

    PubMed

    Mcloughlin, Leigh; Fryazinov, Oleg; Moseley, Mark; Sanchez, Mathieu; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander

    2016-01-01

    The SHIVA project was designed to provide virtual sculpting tools for young people with complex disabilities, allowing them to engage with artistic and creative activities that they might otherwise never be able to access. Their creations are then physically built using 3D printing. To achieve this, the authors built a generic, accessible GUI and a suitable geometric modeling system and used these to produce two prototype modeling exercises. These tools were deployed in a school for students with complex disabilities and are now being used for a variety of educational and developmental purposes. This article presents the project's motivations, approach, and implementation details together with initial results, including 3D printed objects designed by young people with disabilities.

  3. Virtual 3D tumor marking-exact intraoperative coordinate mapping improve post-operative radiotherapy

    PubMed Central

    2011-01-01

    The quality of the interdisciplinary interface in oncological treatment between surgery, pathology and radiotherapy is mainly dependent on reliable anatomical three-dimensional (3D) allocation of specimen and their context sensitive interpretation which defines further treatment protocols. Computer-assisted preoperative planning (CAPP) allows for outlining macroscopical tumor size and margins. A new technique facilitates the 3D virtual marking and mapping of frozen sections and resection margins or important surgical intraoperative information. These data could be stored in DICOM format (Digital Imaging and Communication in Medicine) in terms of augmented reality and transferred to communicate patient's specific tumor information (invasion to vessels and nerves, non-resectable tumor) to oncologists, radiotherapists and pathologists. PMID:22087558

  4. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  5. Fast generation of virtual X-ray images for reconstruction of 3D anatomy.

    PubMed

    Ehlke, Moritz; Ramm, Heiko; Lamecker, Hans; Hege, Hans-Christian; Zachow, Stefan

    2013-12-01

    We propose a novel GPU-based approach to render virtual X-ray projections of deformable tetrahedral meshes. These meshes represent the shape and the internal density distribution of a particular anatomical structure and are derived from statistical shape and intensity models (SSIMs). We apply our method to improve the geometric reconstruction of 3D anatomy (e.g. pelvic bone) from 2D X-ray images. For that purpose, shape and density of a tetrahedral mesh are varied and virtual X-ray projections are generated within an optimization process until the similarity between the computed virtual X-ray and the respective anatomy depicted in a given clinical X-ray is maximized. The OpenGL implementation presented in this work deforms and projects tetrahedral meshes of high resolution (200,000+ tetrahedra) at interactive rates. It generates virtual X-rays that accurately depict the density distribution of an anatomy of interest. Compared to existing methods that accumulate X-ray attenuation in deformable meshes, our novel approach significantly boosts the deformation/projection performance. The proposed projection algorithm scales better with respect to mesh resolution and complexity of the density distribution, and the combined deformation and projection on the GPU scales better with respect to the number of deformation parameters. The gain in performance allows for a larger number of cycles in the optimization process. Consequently, it reduces the risk of being stuck in a local optimum. We believe that our approach will improve treatments in orthopedics, where 3D anatomical information is essential.
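
    The operation underlying any virtual X-ray is a line integral of attenuation through the anatomy. The sketch below (NumPy, a voxel volume, parallel rays) only illustrates that idea; it is not the GPU tetrahedral-mesh renderer described in the paper.

```python
# Minimal digitally reconstructed radiograph: Beer-Lambert attenuation summed
# along parallel rays through a voxel volume (toy stand-in for the real method).
import numpy as np

def virtual_xray(density, voxel_size=1.0, axis=2):
    """I/I0 = exp(-integral of mu dl), integrated along one volume axis."""
    line_integral = density.sum(axis=axis) * voxel_size
    return np.exp(-line_integral)

# Toy "anatomy": a dense sphere inside a weakly attenuating block
z, y, x = np.mgrid[0:64, 0:64, 0:64]
density = np.full((64, 64, 64), 0.005)
density[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2] = 0.05

drr = virtual_xray(density)
print(drr.shape, float(drr.min()), float(drr.max()))
```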

  6. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove useful instruments to enhance one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems has lain in the rich-media implementation techniques: each system had to be created individually, since reusing the media, whether in part or in whole, was not directly possible and everything had to be assembled by hand. This makes E-Learning systems exceedingly expensive to produce, in terms of both time and money. Media-3D, or M3D, is a new platform-independent programming language developed at the Fraunhofer Institute for Media Communication to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language which is capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for the training of technical personnel.

  7. A Framework for Aligning Instructional Design Strategies with Affordances of CAVE Immersive Virtual Reality Systems

    ERIC Educational Resources Information Center

    Ritz, Leah T.; Buss, Alan R.

    2016-01-01

    Increasing availability of immersive virtual reality (IVR) systems, such as the Cave Automatic Virtual Environment (CAVE) and head-mounted displays, for use in education contexts is providing new opportunities and challenges for instructional designers. By highlighting the affordances of IVR specific to the CAVE, the authors emphasize the…

  8. [3D-TECHNOLOGIES AS A CORE ELEMENT OF PLANNING AND IMPLEMENTATION OF VIRTUAL AND ACTUAL RENAL SURGERY].

    PubMed

    Glybochko, P V; Aljaev, Ju G; Bezrukov, E A; Sirota, E S; Proskura, A V

    2015-01-01

    The purpose of this article is to demonstrate the role of modern computer technologies in performing virtual and actual renal tumor surgery. Currently 3D modeling makes it possible to clearly define strategy and tactics of an individual patient treatment.

  9. Dynamic 3-D virtual fixtures for minimally invasive beating heart procedures.

    PubMed

    Ren, Jing; Patel, Rajni V; McIsaac, Kenneth A; Guiraudon, Gerard; Peters, Terry M

    2008-08-01

    Two-dimensional or 3-D visual guidance is often used for minimally invasive cardiac surgery and diagnosis. This visual guidance suffers from several drawbacks such as limited field of view, loss of signal from time to time, and in some cases, difficulty of interpretation. These limitations become more evident in beating-heart procedures when the surgeon has to perform a surgical procedure in the presence of heart motion. In this paper, we propose dynamic 3-D virtual fixtures (DVFs) to augment the visual guidance system with haptic feedback, to provide the surgeon with more helpful guidance by constraining the surgeon's hand motions, thereby protecting sensitive structures. DVFs can be generated from preoperative dynamic magnetic resonance (MR) or computed tomography (CT) images and then mapped to the patient during surgery. We have validated the feasibility of the proposed method on several simulated surgical tasks using a volunteer's cardiac image dataset. Validation results show that the integration of visual and haptic guidance can permit a user to perform surgical tasks more easily and with a reduced error rate. We believe this is the first work presented in the field of virtual fixtures that explicitly considers heart motion.
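
    As a hedged illustration of the basic virtual fixture idea, the sketch below computes a repulsive force that keeps a tool tip away from a protected structure. The structure here is a static sphere with invented gains; the paper's DVFs are instead derived from dynamic MR/CT data and follow the heart motion.

```python
# Toy "forbidden region" virtual fixture: spring-like force pushing the tool
# tip out of a spherical safety margin (static geometry, made-up stiffness).
import numpy as np

def fixture_force(tip, center, radius, stiffness=200.0):
    offset = tip - center
    dist = float(np.linalg.norm(offset))
    penetration = radius - dist
    if penetration <= 0.0:            # tool tip is outside the protected region
        return np.zeros(3)
    direction = offset / max(dist, 1e-9)
    return stiffness * penetration * direction

tip = np.array([0.0, 0.0, 9.0])       # tool tip position (mm)
print(fixture_force(tip, center=np.zeros(3), radius=10.0))
```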

  10. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    PubMed

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. First, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Notably, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi
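
    The atrial model integrates biophysically detailed cell models on an anatomical mesh; as a much-reduced stand-in, the sketch below runs a generic 2-D excitable medium (FitzHugh-Nagumo kinetics with diffusion) to show the reaction-diffusion machinery on which such simulations of AP conduction rest. All parameters are generic textbook values, not the paper's.

```python
# Generic 2-D excitable medium (FitzHugh-Nagumo + diffusion), illustrative only.
import numpy as np

n, dt, dx = 128, 0.05, 1.0
D, a, b, eps = 1.0, 0.1, 0.5, 0.02
v = np.zeros((n, n))                  # fast "voltage-like" variable
w = np.zeros((n, n))                  # slow recovery variable
v[:, :5] = 1.0                        # planar stimulus along one edge

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(2000):                 # explicit Euler time stepping
    dv = D * laplacian(v) + v * (1.0 - v) * (v - a) - w
    dw = eps * (b * v - w)
    v, w = v + dt * dv, w + dt * dw

print("fraction of tissue currently excited:", float((v > 0.5).mean()))
```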

  11. Combining in-situ lithography with 3D printed solid immersion lenses for single quantum dot spectroscopy

    PubMed Central

    Sartison, Marc; Portalupi, Simone Luca; Gissibl, Timo; Jetter, Michael; Giessen, Harald; Michler, Peter

    2017-01-01

    In the current study, we report on the deterministic fabrication of solid immersion lenses (SILs) on lithographically pre-selected semiconductor quantum dots (QDs). We demonstrate the combination of state-of-the-art low-temperature in-situ photolithography and femtosecond 3D direct laser writing. Several QDs are pre-selected with a localization accuracy of less than 2 nm with low-temperature lithography and three-dimensional laser writing is then used to deterministically fabricate hemispherical lenses on top of the quantum emitter with a submicrometric precision. Due to the printed lenses, the QD light extraction efficiency is enhanced by a factor of 2, the pumping laser is focused more, and the signal-to-noise ratio is increased, leading to an improved localization accuracy of the QD to well below 1 nm. Furthermore, modifications of the QD properties, i.e. strain and variation of internal quantum efficiency induced by the printed lenses, are also reported. PMID:28057941

  12. Combining in-situ lithography with 3D printed solid immersion lenses for single quantum dot spectroscopy

    NASA Astrophysics Data System (ADS)

    Sartison, Marc; Portalupi, Simone Luca; Gissibl, Timo; Jetter, Michael; Giessen, Harald; Michler, Peter

    2017-01-01

    In the current study, we report on the deterministic fabrication of solid immersion lenses (SILs) on lithographically pre-selected semiconductor quantum dots (QDs). We demonstrate the combination of state-of-the-art low-temperature in-situ photolithography and femtosecond 3D direct laser writing. Several QDs are pre-selected with a localization accuracy of less than 2 nm with low-temperature lithography and three-dimensional laser writing is then used to deterministically fabricate hemispherical lenses on top of the quantum emitter with a submicrometric precision. Due to the printed lenses, the QD light extraction efficiency is enhanced by a factor of 2, the pumping laser is focused more, and the signal-to-noise ratio is increased, leading to an improved localization accuracy of the QD to well below 1 nm. Furthermore, modifications of the QD properties, i.e. strain and variation of internal quantum efficiency induced by the printed lenses, are also reported.

  13. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
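
    A toy version of the distributable block-volume idea, assuming NumPy/SciPy and Python's process pool rather than the platform's own libraries: the volume is cut into slabs, each slab is filtered in a worker process, and the results are reassembled (halo handling at block boundaries is deliberately ignored here).

```python
# Sketch: split a 3D volume into slabs, process them in parallel, reassemble.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import gaussian_filter

def split_blocks(volume, block=64):
    for z in range(0, volume.shape[0], block):
        yield z, volume[z:z + block]

def process_block(args):
    z, data = args
    return z, gaussian_filter(data, sigma=1.0)   # stand-in per-block operation

def process_volume(volume, block=64):
    out = np.empty_like(volume)
    with ProcessPoolExecutor() as pool:
        for z, result in pool.map(process_block, split_blocks(volume, block)):
            out[z:z + result.shape[0]] = result
    return out

if __name__ == "__main__":
    vol = np.random.rand(256, 128, 128).astype(np.float32)
    print(process_volume(vol).shape)
```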

  14. Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology

    NASA Astrophysics Data System (ADS)

    Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim

    2016-09-01

    Due to their large penetration depth and small wavelength, hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volumes. However, in classical absorption-based computed tomography, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to making the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it has long been perceived as a synchrotron-based imaging technique. In this contribution we show that by combining high-brightness liquid-metal-jet microfocus sources with suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality reaches a level amenable to automatic 3D segmentation.

  15. An Examination of the Effects of Collaborative Scientific Visualization via Model-Based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning within an Immersive 3D World

    ERIC Educational Resources Information Center

    Soleimani, Ali

    2013-01-01

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits…

  16. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  17. Building a 3D Virtual Liver: Methods for Simulating Blood Flow and Hepatic Clearance on 3D Structures

    PubMed Central

    Rezania, Vahid; Tuszynski, Jack

    2016-01-01

    In this paper, we develop a spatio-temporal modeling approach to describe blood and drug flow, as well as drug uptake and elimination, on an approximation of the liver. Extending previously developed computational approaches, we generate an approximation of a liver, which consists of a portal and hepatic vein vasculature structure embedded in the surrounding liver tissue. The vasculature is generated via constrained constructive optimization, and then converted to a spatial grid of a selected grid size. Estimates for the surrounding upscaled lobule tissue properties are then presented, appropriate to the same grid size. Simulation of fluid flow and drug metabolism (hepatic clearance) is completed using discretized forms of the relevant convective-diffusive-reactive partial differential equations for these processes. This results in a single-stage, uniformly consistent method to simulate equations for blood and drug flow, as well as drug metabolism, on a 3D structure representative of a liver.
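
    The transport solved on the 3D liver grid is convective-diffusive-reactive; a one-dimensional finite-difference sketch of the same type of equation (with made-up parameters) shows the discretisation pattern: drug carried by flow, spread by diffusion, and removed by first-order hepatic clearance.

```python
# 1-D convection-diffusion-reaction sketch (illustrative parameters only).
import numpy as np

nx, dx, dt = 200, 0.05, 0.01          # grid points, spacing (cm), time step (s)
u, D, k = 0.5, 0.01, 0.2              # velocity (cm/s), diffusivity (cm^2/s), clearance (1/s)
c = np.zeros(nx)
c[:5] = 1.0                           # drug bolus at the inlet

for _ in range(500):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind convection
    diff = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + diff - k * c)                            # first-order clearance
    c[0] = 0.0                                                   # clean inflow after the bolus

print("drug remaining in the domain (arb. units):", float(c.sum() * dx))
```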

  18. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes, including virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various visualization and interaction modes.

  19. Accuracy and reproducibility of virtual cutting guides and 3D-navigation for osteotomies of the mandible and maxilla

    PubMed Central

    Bernstein, Jonathan M.; Daly, Michael J.; Chan, Harley; Qiu, Jimmy; Goldstein, David; Muhanna, Nidal; de Almeida, John R.; Irish, Jonathan C.

    2017-01-01

    Background We set out to determine the accuracy of 3D-navigated mandibular and maxillary osteotomies with the ultimate aim to integrate virtual cutting guides and 3D-navigation into ablative and reconstructive head and neck surgery. Methods Four surgeons (two attending, two clinical fellows) completed 224 unnavigated and 224 3D-navigated osteotomies on anatomical models according to preoperative 3D plans. The osteotomized bones were scanned and analyzed. Results Median distance from the virtual plan was 2.1 mm unnavigated (IQR 2.6 mm, ≥3 mm in 33%) and 1.2 mm 3D-navigated (IQR 1.1 mm, ≥3 mm in 6%) (P<0.0001); median pitch was 4.5° unnavigated (IQR 7.1°) and 3.5° 3D-navigated (IQR 4.0°) (P<0.0001); median roll was 7.4° unnavigated (IQR 8.5°) and 2.6° 3D-navigated (IQR 3.8°) (P<0.0001). Conclusion 3D-rendering enables osteotomy navigation. 3 mm is an appropriate planning distance. The next steps are translating virtual cutting guides to free bone flap reconstruction and clinical use. PMID:28249001
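
    The accuracy figures above are medians with interquartile ranges compared by a non-parametric test. The sketch below reproduces that style of summary on synthetic distance data (the gamma distributions and the Mann-Whitney U test are assumptions for illustration; the paper's abstract does not describe its statistics).

```python
# Synthetic re-creation of the median/IQR/">= 3 mm" style of summary.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
unnav = rng.gamma(shape=2.0, scale=1.25, size=224)   # ~2.1 mm median (synthetic)
nav = rng.gamma(shape=2.0, scale=0.70, size=224)     # ~1.2 mm median (synthetic)

for name, d in [("unnavigated", unnav), ("3D-navigated", nav)]:
    q1, med, q3 = np.percentile(d, [25, 50, 75])
    share = 100.0 * (d >= 3.0).mean()
    print(f"{name}: median {med:.1f} mm, IQR {q3 - q1:.1f} mm, >=3 mm in {share:.0f}%")

print("Mann-Whitney p =", mannwhitneyu(unnav, nav).pvalue)
```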

  20. 3D Printed Models and Navigation for Skull Base Surgery: Case Report and Virtual Validation.

    PubMed

    Ritacco, Lucas E; Di Lella, Federico; Mancino, Axel; Gonzalez Bernaldo de Quiros, Fernan; Boccio, Carlos; Milano, Federico E

    2015-01-01

    In recent years, computer-assisted surgery tools have become more versatile. Having access to a 3D printed model expands the possibility for surgeons to practice with the particular anatomy of a patient before surgery and improve their skills. Optical navigation is capable of guiding a surgeon according to a previously defined plan. These methods improve accuracy and safety at the moment of executing the operation. We intend to carry out a validation process for computer-assisted tools. The aim of this project is to propose a comparative validation method to enable physicians to evaluate differences between a virtually planned approach trajectory and the actually executed course. In summary, this project is focused on decoding data in order to obtain numerical values so as to establish the quality of surgical procedures.

  1. Predicting LER and LWR in SAQP with 3D virtual fabrication

    NASA Astrophysics Data System (ADS)

    Gu, Jiangjiang (Jimmy); Zhao, Dalong; Allampalli, Vasanth; Faken, Daniel; Greiner, Ken; Fried, David M.

    2016-03-01

    For the first time, process impact on line-edge roughness (LER) and line-width roughness (LWR) in a back-end-of-line (BEOL) self-aligned quadruple patterning (SAQP) flow has been systematically investigated through predictive 3D virtual fabrication. This frequency dependent LER study shows that both deposition and etching effectively reduce high frequency LER, while deposition is much more effective in reducing low frequency LER. Spacer-assisted patterning technology reduces LWR significantly by creating correlated edges, and further LWR improvement can be achieved by optimizing individual process effects on LER. Our study provides a guideline for the understanding and optimization of LER and LWR in advanced technology nodes.
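
    Frequency-dependent LER is commonly separated by looking at the power spectral density of the edge deviation alongside the overall 3-sigma roughness. The sketch below does this for a synthetic edge with NumPy; it is not the virtual fabrication model used in the study, and the low/high frequency split is arbitrary.

```python
# Synthetic line edge: 3-sigma LER plus a low/high frequency split of its PSD.
import numpy as np

rng = np.random.default_rng(1)
n, pitch = 1024, 1.0                              # samples along the line, nm spacing
edge = np.cumsum(rng.normal(scale=0.05, size=n))  # correlated edge wander (fake)
edge -= edge.mean()

ler_3sigma = 3.0 * edge.std()
psd = np.abs(np.fft.rfft(edge)) ** 2 / n          # power spectral density
freq = np.fft.rfftfreq(n, d=pitch)                # spatial frequency (1/nm)

low = psd[freq < 0.01].sum()                      # arbitrary cut-off for illustration
high = psd[freq >= 0.01].sum()
print(f"LER(3-sigma) = {ler_3sigma:.2f} nm, low-f power = {low:.2f}, high-f power = {high:.2f}")
```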

  2. The Pixelated Professor: Faculty in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Blackmon, Stephanie

    2015-01-01

    Online environments, particularly virtual worlds, can sometimes complicate issues of self expression. For example, the faculty member who loves punk rock has an opportunity, through hairstyle and attire choices in the virtual world, to share that part of herself with students. However, deciding to share that part of the self can depend on a number…

  3. Teaching Literature in Virtual Worlds: Immersive Learning in English Studies

    ERIC Educational Resources Information Center

    Webb, Allen, Ed.

    2011-01-01

    What are the realities and possibilities of utilizing on-line virtual worlds as teaching tools for specific literary works? Through engaging and surprising stories from classrooms where virtual worlds are in use, this book invites readers to understand and participate in this emerging and valuable pedagogy. It examines the experience of high…

  4. Using Immersive Virtual Reality for Electrical Substation Training

    ERIC Educational Resources Information Center

    Tanaka, Eduardo H.; Paludo, Juliana A.; Cordeiro, Carlúcio S.; Domingues, Leonardo R.; Gadbem, Edgar V.; Euflausino, Adriana

    2015-01-01

    Usually, distribution electricians are called upon to solve technical problems found in electrical substations. In this project, we apply problem-based learning to a training program for electricians, with the help of a virtual reality environment that simulates a real substation. Using this virtual substation, users may safely practice maneuvers…

  5. Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments

    NASA Technical Reports Server (NTRS)

    Wu (u. Sjarpm)

    2012-01-01

    The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.

  6. The Virtual Pelvic Floor, a tele-immersive educational environment.

    PubMed Central

    Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.

    1999-01-01

    This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting-table-format virtual reality displays, are networked together, providing an environment where teacher and students share a high-quality three-dimensional anatomical model and are able to converse, see each other, and point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378

  7. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.

  8. Reaching to virtual targets: The oblique effect reloaded in 3-D.

    PubMed

    Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2017-02-20

    Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space.

  9. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors

    PubMed Central

    Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2015-01-01

    The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberrations in FGFRs correlate with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded an SD value of 0.75 pIC50 units from the measured inhibition affinities and a Pearson correlation coefficient R² of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy-set validation was used to measure the efficiency of the model by calculating the enrichment factor (EF). Based on the combinatorial pharmacophore model, a virtual screening against the SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors. PMID:26110383
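
    The enrichment factor used in the decoy-set validation has a simple definition: the hit rate among the top-scoring fraction of the ranked library divided by the hit rate expected by chance. A sketch on synthetic scores (not the paper's data):

```python
# Enrichment factor at a given screening fraction, on synthetic scores.
import numpy as np

def enrichment_factor(scores, is_active, fraction=0.01):
    order = np.argsort(scores)[::-1]              # best scores first
    n_top = max(1, int(len(scores) * fraction))
    hit_rate_top = is_active[order[:n_top]].mean()
    hit_rate_all = is_active.mean()
    return hit_rate_top / hit_rate_all

rng = np.random.default_rng(7)
is_active = np.zeros(10000, dtype=bool)
is_active[:100] = True                            # 1% actives hidden among decoys
scores = rng.normal(size=10000) + 2.0 * is_active # actives tend to score higher
print(f"EF(1%) = {enrichment_factor(scores, is_active):.1f}")
```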

  10. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
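
    The oil-spill module abstracts the slick as many particles moved by current and wind and spread by a small random walk. A minimal particle-advection sketch (illustrative parameter values, not VV-Ocean's implementation) is given below.

```python
# Toy surface oil-spill particle model: current + wind drift + random-walk diffusion.
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps = 5000, 60.0, 240           # particles, time step (s), number of steps
pos = np.zeros((n, 2))                   # all particles start at the release point (m)
current = np.array([0.20, 0.05])         # ambient surface current (m/s)
wind = np.array([5.0, 0.0])              # wind at 10 m (m/s)
wind_factor, diffusivity = 0.03, 1.0     # 3% wind drift, horizontal diffusivity (m^2/s)

for _ in range(steps):
    drift = current + wind_factor * wind
    spread = rng.normal(scale=np.sqrt(2.0 * diffusivity * dt), size=(n, 2))
    pos += drift * dt + spread

print("slick centre (m):", pos.mean(axis=0), "spread, std (m):", pos.std(axis=0))
```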

  11. 3D modeling of the Strasbourg's Cathedral basements for interdisciplinary research and virtual visits

    NASA Astrophysics Data System (ADS)

    Landes, T.; Kuhnle, G.; Bruna, R.

    2015-08-01

    On the occasion of the millennium celebration of Strasbourg Cathedral, a transdisciplinary research group composed of archaeologists, surveyors, architects, art historians and a stonemason re-examined the 1966-1972 excavations under the St. Lawrence's Chapel of the Cathedral, which contain remains of Roman and medieval masonry. The 3D modeling of the Chapel was realized by combining conventional surveying techniques for the network creation, laser scanning for the model creation and photogrammetric techniques for the texturing of a few parts. According to the requirements and the end-user of the model, the level of detail and level of accuracy were adapted and assessed for every floor. The basement was acquired and modeled with more detail and higher accuracy than the other parts. Thanks to this modeling work, archaeologists can confront their assumptions with those of other disciplines by simulating the construction of other worship edifices on the massive stones composing the basement. The virtual reconstructions provided evidence in support of these assumptions and served for communication via virtual visits.

  12. Analytical 3D views and virtual globes — scientific results in a familiar spatial context

    NASA Astrophysics Data System (ADS)

    Tiede, Dirk; Lang, Stefan

    In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (simulated crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. Both day-to-day users and high-level decision makers are the intended audience for this tailored information product. The degree of abstraction required for understanding a complex analytical content is balanced against the ease and appeal with which the context is conveyed.

  13. Evaluation of human behavior in collision avoidance: a study inside immersive virtual reality.

    PubMed

    Ouellette, Michel; Chagnon, Miguel; Faubert, Jocelyn

    2009-04-01

    During our daily displacements, we must take into account the individuals advancing toward us in order to avoid possible collisions with them. We developed an experimental design in a virtual immersion room which allows us to evaluate human capacities for avoiding collisions with other people. In addition, the design allows participants to interact naturally inside this immersive virtual reality setup when a pedestrian is moving toward them, creating a possible risk of collision. Results suggest that the performance is associated with visual and motor capacities and could be adjusted by cognitive social perception.

  14. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    NASA Astrophysics Data System (ADS)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    …In the ARTEC digital mock-up, for example, individual frames can be selected, already polygonal and geo-referenced at the time of capture; automated texturing, however, is not possible, unlike in the low-cost environment, which allows a good graphic definition to be produced. Once the final 3D models were obtained, we proceeded to a geometric and graphic comparison of the results. In order to provide an accuracy requirement and an assessment of the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies of the virtual reconstructions, a 3D documentation procedure was codified that endorses the use of terrestrial sensors for the documentation of antlers. The results thus obtained were compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, identification is based on data such as length, volume, colour, texture, openness, tips, structure, etc. Such data, currently appreciated only with traditional instruments such as a tape measure, would be well represented by a process of virtual reconstruction and cataloguing.

  15. Inclusion of Immersive Virtual Learning Environments and Visual Control Systems to Support the Learning of Students with Asperger Syndrome

    ERIC Educational Resources Information Center

    Lorenzo, Gonzalo; Pomares, Jorge; Lledo, Asuncion

    2013-01-01

    This paper presents the use of immersive virtual reality systems in the educational intervention with Asperger students. The starting points of this study are features of these students' cognitive style that requires an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality…

  16. Numerical simulation of X-wing type biplane flapping wings in 3D using the immersed boundary method.

    PubMed

    Tay, W B; van Oudheusden, B W; Bijl, H

    2014-09-01

    The numerical simulation of an insect-sized 'X-wing' type biplane flapping wing configuration is performed in 3D using an immersed boundary method solver at Reynolds numbers equal to 1000 (1 k) and 5 k, based on the wing's root chord length. This X-wing type flapping configuration draws its inspiration from Delfly, a bio-inspired ornithopter MAV which has two pairs of wings flapping in anti-phase in a biplane configuration. The objective of the present investigation is to assess the aerodynamic performance when the original Delfly flapping wing micro-aerial vehicle (FMAV) is reduced to the size of an insect. Results show that the X-wing configuration gives more than twice the average thrust compared with only flapping the upper pair of wings of the X-wing. However, the X-wing's average thrust is only 40% that of the upper wing flapping at twice the stroke angle. Despite this, the increased stability which results from the smaller lift and moment variation of the X-wing configuration makes it more suited for sharp image capture and recognition. These advantages make the X-wing configuration an attractive alternative design for insect-sized FMAVS compared to the single wing configuration. In the Reynolds number comparison, the vorticity iso-surface plot at a Reynolds number of 5 k revealed smaller, finer vortical structures compared to the simulation at 1 k, due to vortices' breakup. In comparison, the force output difference is much smaller between Re = 1 k and 5 k. Increasing the body inclination angle generates a uniform leading edge vortex instead of a conical one along the wingspan, giving higher lift. Understanding the force variation as the body inclination angle increases will allow FMAV designers to optimize the thrust and lift ratio for higher efficiency under different operational requirements. Lastly, increasing the spanwise flexibility of the wings increases the thrust slightly but decreases the efficiency. The thrust result is similar to one of the

  17. Visual appearance of a virtual upper limb modulates the temperature of the real hand: a thermal imaging study in Immersive Virtual Reality.

    PubMed

    Tieri, Gaetano; Gioia, Annamaria; Scandola, Michele; Pavone, Enea F; Aglioti, Salvatore M

    2017-02-21

    To explore the link between the Sense of Embodiment (SoE) over a virtual hand and the physiological regulation of skin temperature, 24 healthy participants were immersed in virtual reality through a Head Mounted Display and had their real limb temperature recorded by means of a high-sensitivity infrared camera. Participants observed a virtual right upper limb (appearing either normally, or with the hand detached from the forearm) or limb-shaped non-corporeal control objects (continuous or discontinuous wooden blocks) from a first-person perspective. Subjective ratings of SoE were collected in each observation condition, as well as temperatures of the right and left hand, wrist and forearm. The observation of these complex, body and body-related virtual scenes resulted in increased real hand temperature when compared to a baseline condition in which a 3D virtual ball was presented. Crucially, observation of non-natural appearances of the virtual limb (discontinuous limb) and of limb-shaped non-corporeal objects elicited a large increase in real hand temperature and low SoE. In contrast, observation of the full virtual limb caused high SoE and small temperature changes in the real hand with respect to the other conditions. Interestingly, the temperature difference across the different conditions occurred according to a topographic rule that included both hands. Our study sheds new light on the role of an external hand's visual appearance and suggests a tight link between higher-order bodily self-representations and the topographic regulation of skin temperature.

  18. Measuring Flow Experience in an Immersive Virtual Environment for Collaborative Learning

    ERIC Educational Resources Information Center

    van Schaik, P.; Martin, S.; Vallance, M.

    2012-01-01

    In contexts other than immersive virtual environments, theoretical and empirical work has identified flow experience as a major factor in learning and human-computer interaction. Flow is defined as a "holistic sensation that people feel when they act with total involvement". We applied the concept of flow to modeling the experience of…

  19. The Utility of Using Immersive Virtual Environments for the Assessment of Science Inquiry Learning

    ERIC Educational Resources Information Center

    Code, Jillianne; Clarke-Midura, Jody; Zap, Nick; Dede, Chris

    2013-01-01

    Determining the effectiveness of any educational technology depends upon teachers' and learners' perception of the functional utility of that tool for teaching, learning, and assessment. The Virtual Performance project at Harvard University is developing and studying the feasibility of using immersive technology to develop performance…

  20. A Learning Evaluation for an Immersive Virtual Laboratory for Technical Training Applied into a Welding Workshop

    ERIC Educational Resources Information Center

    Torres, Francisco; Neira Tovar, Leticia A.; del Rio, Marta Sylvia

    2017-01-01

    This study aims to explore the results of welding virtual training performance, designed using a learning model based on cognitive and usability techniques, applying an immersive concept focused on the person's attention. Moreover, it also intended to demonstrate that there exists a moderating effect of performance improvement when the user experience is taken…

  1. Cognitive Presence and Effect of Immersion in Virtual Learning Environment

    ERIC Educational Resources Information Center

    Katernyak, Ihor; Loboda, Viktoriya

    2016-01-01

    This paper presents the approach to successful application of two knowledge management techniques--community of practice and eLearning, in order to create and manage a competence-developing virtual learning environment. It explains how "4A" model of involving practitioners in eLearning process (through attention, actualization,…

  2. CamMedNP: Building the Cameroonian 3D structural natural products database for virtual screening

    PubMed Central

    2013-01-01

    Background Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets and the availability of such is vital for drug discovery protocols. We present CamMedNP - a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives which were obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. Description In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80 % of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the “drug-likeness” of this database using Lipinski’s “Rule of Five”. A diversity analysis has been carried out in comparison with the ChemBridge diverse database. Conclusion CamMedNP could be highly useful for database screening and natural product lead generation programs. PMID:23590173
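
    As a minimal sketch of the Lipinski "Rule of Five" drug-likeness evaluation mentioned above (RDKit is an assumption here; the record does not state which toolkit was used, and the SMILES strings are placeholders rather than CamMedNP entries):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles: str) -> int:
    """Count Lipinski Rule of Five violations for a single compound."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    violations = 0
    violations += Descriptors.MolWt(mol) > 500       # molecular weight
    violations += Descriptors.MolLogP(mol) > 5       # lipophilicity (cLogP)
    violations += Lipinski.NumHDonors(mol) > 5       # hydrogen-bond donors
    violations += Lipinski.NumHAcceptors(mol) > 10   # hydrogen-bond acceptors
    return violations

# Placeholder natural-product-like SMILES, not entries from CamMedNP itself.
for smi in ["CC(=O)OC1=CC=CC=C1C(=O)O", "O=C(O)c1ccccc1O"]:
    print(smi, "violations:", rule_of_five_violations(smi))
```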

  3. Assessing endocranial variations in great apes and humans using 3D data from virtual endocasts.

    PubMed

    Bienvenu, Thibaut; Guy, Franck; Coudyzer, Walter; Gilissen, Emmanuel; Roualdès, Georges; Vignaud, Patrick; Brunet, Michel

    2011-06-01

    Modern humans are characterized by their large, complex, and specialized brain. Human brain evolution can be addressed through direct evidence provided by fossil hominid endocasts (i.e. paleoneurology), or through indirect evidence of extant species comparative neurology. Here we use the second approach, providing an extant comparative framework for hominid paleoneurological studies. We explore endocranial size and shape differences among great apes and humans, as well as between sexes. We virtually extracted 72 endocasts, sampling all extant great ape species and modern humans, and digitized 37 landmarks on each for 3D generalized Procrustes analysis. All species can be differentiated by their endocranial shape. Among great apes, endocranial shapes vary from short (orangutans) to long (gorillas), perhaps in relation to different facial orientations. Endocranial shape differences among African apes are partly allometric. Major endocranial traits distinguishing humans from great apes are endocranial globularity, reflecting neurological reorganization, and features linked to structural responses to posture and bipedal locomotion. Human endocasts are also characterized by posterior location of foramina rotunda relative to optic canals, which could be correlated to lesser subnasal prognathism compared to living great apes. Species with larger brains (gorillas and humans) display greater sexual dimorphism in endocranial size, while sexual dimorphism in endocranial shape is restricted to gorillas, differences between males and females being at least partly due to allometry. Our study of endocranial variations in extant great apes and humans provides a new comparative dataset for studies of fossil hominid endocasts.
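
    A simplified sketch of the 3D generalized Procrustes superimposition that underlies such landmark analyses is given below (translation, unit scaling, and SVD-based rotation to an iteratively updated consensus); it is a generic illustration, not the authors' software, and the landmark data are random placeholders.

```python
import numpy as np

def _align(shape, reference):
    """Optimally rotate one centred, unit-scaled landmark set onto another (Kabsch)."""
    u, _, vt = np.linalg.svd(shape.T @ reference)
    r = u @ vt
    if np.linalg.det(r) < 0:          # avoid reflections
        u[:, -1] *= -1
        r = u @ vt
    return shape @ r

def generalized_procrustes(shapes, n_iter=10):
    """shapes: array (n_specimens, n_landmarks, 3) of raw landmark coordinates."""
    # Remove position and scale from every specimen.
    shapes = np.array([s - s.mean(axis=0) for s in shapes], dtype=float)
    shapes = np.array([s / np.linalg.norm(s) for s in shapes])
    mean = shapes[0]
    for _ in range(n_iter):
        shapes = np.array([_align(s, mean) for s in shapes])
        mean = shapes.mean(axis=0)
        mean /= np.linalg.norm(mean)   # keep the consensus at unit centroid size
    return shapes, mean

# Toy data: 72 specimens x 37 landmarks x 3 coordinates, mirroring the study design.
rng = np.random.default_rng(0)
aligned, consensus = generalized_procrustes(rng.normal(size=(72, 37, 3)))
print(aligned.shape, consensus.shape)
```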

  4. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
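
    The newly proposed measures are straightforward to compute from a fixation sequence; the sketch below illustrates saccadic step size and x-y target distance on a placeholder scanpath (coordinates and target location are hypothetical, not data from the study).

```python
import numpy as np

def saccadic_step_sizes(fixations):
    """Euclidean distance between consecutive fixations, i.e. the 'saccadic
    step size' measure named above (units follow the input coordinates)."""
    fixations = np.asarray(fixations, dtype=float)
    return np.linalg.norm(np.diff(fixations, axis=0), axis=1)

def xy_target_distance(fixations, target):
    """Horizontal-vertical distance of each fixation from the target location."""
    return np.linalg.norm(np.asarray(fixations, float) - np.asarray(target, float), axis=1)

# Placeholder scanpath in screen coordinates (px) with a target at (512, 384).
scanpath = [(100, 80), (220, 150), (330, 260), (470, 350), (510, 380)]
print("step sizes:", saccadic_step_sizes(scanpath).round(1))
print("distance to target:", xy_target_distance(scanpath, (512, 384)).round(1))
```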

  5. Workshop Report on Virtual Worlds and Immersive Environments

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephanie R.; Cowan-Sharp, Jessy; Dodson, Karen E.; Damer, Bruce; Ketner, Bob

    2009-01-01

    The workshop revolved around three framing ideas or scenarios about the evolution of virtual environments: 1. Remote exploration: The ability to create high fidelity environments rendered from external data or models such that exploration, design and analysis that is truly interoperable with the physical world can take place within them. 2. We all get to go: The ability to engage anyone in being a part of or contributing to an experience (such as a space mission), no matter their training or location. It is the creation of a new paradigm for education, outreach, and the conduct of science in society that is truly participatory. 3. Become the data: A vision of a future where boundaries between the physical and the virtual have ceased to be meaningful. What would this future look like? Is this plausible? Is it desirable? Why and why not?

  6. Immersive virtual environment technology: a promising tool for future social and behavioral genomics research and practice.

    PubMed

    Persky, Susan; McBride, Colleen M

    2009-12-01

    Social and behavioral research needs to get started now if scientists are to direct genomic discoveries to address pressing public health problems. Advancing social and behavioral science will require innovative and rigorous communication methodologies that move researchers beyond reliance on traditional tools and their inherent limitations. One such emerging research tool is immersive virtual environment technology (virtual reality), a methodology that gives researchers the ability to maintain high experimental control and mundane realism of scenarios; portray and manipulate complex, abstract objects and concepts; and implement innovative implicit behavioral measurement. This report suggests the role that immersive virtual environment technology can play in furthering future research in genomics-related education, decision making, test intentions, behavior change, and health-care provider behaviors. Practical implementation and challenges are also discussed.

  7. Virtual immersion for post-stroke hand rehabilitation therapy.

    PubMed

    Tsoupikova, Daria; Stoykov, Nikolay S; Corrigan, Molly; Thielbar, Kelly; Vick, Randy; Li, Yu; Triandafilou, Kristen; Preuss, Fabian; Kamper, Derek

    2015-02-01

    Stroke is the leading cause of serious, long-term disability in the United States. Impairment of upper extremity function is a common outcome following stroke, often to the detriment of lifestyle and employment opportunities. While the upper extremity is a natural target for therapy, treatment may be hampered by limitations in baseline capability as lack of success may discourage arm and hand use. We developed a virtual reality (VR) system in order to encourage repetitive task practice. This system combined an assistive glove with a novel VR environment. A set of exercises for this system was developed to encourage specific movements. Six stroke survivors with chronic upper extremity hemiparesis volunteered to participate in a pilot study in which they completed 18 one-hour training sessions with the VR system. Performance with the system was recorded across the 18 training sessions. Clinical evaluations of motor control were conducted at three time points: prior to initiation of training, following the end of training, and 1 month later. Subjects displayed significant improvement on performance of the virtual tasks over the course of the training, although for the clinical outcome measures only lateral pinch showed significant improvement. Future expansion to multi-user virtual environments may extend the benefits of this system for stroke survivors with hemiparesis by furthering engagement in the rehabilitation exercises.

  8. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…

  9. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    PubMed

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  10. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    PubMed Central

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356

  11. A cone-beam CT based technique to augment the 3D virtual skull model with a detailed dental surface.

    PubMed

    Swennen, G R J; Mommaerts, M Y; Abeloos, J; De Clercq, C; Lamoral, P; Neyt, N; Casselman, J; Schutyser, F

    2009-01-01

    Cone-beam computed tomography (CBCT) is used for maxillofacial imaging. 3D virtual planning of orthognathic and facial orthomorphic surgery requires detailed visualisation of the interocclusal relationship. This study aimed to introduce and evaluate the use of a double CBCT scan procedure with a modified wax bite wafer to augment the 3D virtual skull model with a detailed dental surface. The impressions of the dental arches and the wax bite wafer were scanned for ten patients separately using a high resolution standardized CBCT scanning protocol. Surface-based rigid registration using ICP (iterative closest points) was used to fit the virtual models on the wax bite wafer. Automatic rigid point-based registration of the wax bite wafer on the patient scan was performed to implement the digital virtual dental arches into the patient's skull model. Probability error histograms showed errors of ≤0.22 mm (25% percentile), ≤0.44 mm (50% percentile) and ≤1.09 mm (90% percentile) for ICP surface matching. The mean registration error for automatic point-based rigid registration was 0.18 ± 0.10 mm (range 0.13-0.26 mm). The results show the potential of a double CBCT scan procedure with a modified wax bite wafer to set up a 3D virtual augmented model of the skull with a detailed dental surface.
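
    The automatic point-based rigid registration step reported above can be illustrated with a standard least-squares (SVD-based) fit of corresponding fiducial points; the sketch below is a generic implementation with placeholder coordinates, not the software used in the study.

```python
import numpy as np

def rigid_point_registration(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    corresponding source points onto target points."""
    source, target = np.asarray(source, float), np.asarray(target, float)
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = tgt_c - r @ src_c
    return r, t

def registration_error(source, target, r, t):
    """Mean residual distance after registration (analogous to the reported
    0.18 ± 0.10 mm point-based error)."""
    residuals = (r @ np.asarray(source, float).T).T + t - np.asarray(target, float)
    return np.linalg.norm(residuals, axis=1).mean()

# Placeholder fiducial points on the wax bite wafer in two scans (mm).
src = np.random.default_rng(1).uniform(0, 50, size=(6, 3))
rot90z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tgt = src @ rot90z.T + np.array([5.0, 2.0, -3.0])
r, t = rigid_point_registration(src, tgt)
print("mean error [mm]:", round(registration_error(src, tgt, r, t), 6))
```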

  12. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is emerging as an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optic sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  13. 'My Virtual Dream': Collective Neurofeedback in an Immersive Art Environment.

    PubMed

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions.
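
    The neurofeedback target, relative spectral power in the alpha and beta bands, can be estimated from a single EEG channel as sketched below (Welch periodogram via SciPy; the sampling rate, band edges and the signal itself are placeholders, not the study's data).

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, band, total=(1.0, 40.0)):
    """Relative power of one frequency band in a single EEG channel.
    eeg: 1-D array of samples; fs: sampling rate [Hz]; band: (low, high) in Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    in_total = (freqs >= total[0]) & (freqs <= total[1])
    return psd[in_band].sum() / psd[in_total].sum()

# Placeholder one-minute recording at 250 Hz (random noise standing in for EEG).
fs = 250
eeg = np.random.default_rng(2).normal(size=60 * fs)
print("relative alpha:", round(relative_band_power(eeg, fs, (8, 12)), 3))
print("relative beta:", round(relative_band_power(eeg, fs, (13, 30)), 3))
```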

  14. The Immersive Virtual Reality Experience: A Typology of Users Revealed Through Multiple Correspondence Analysis Combined with Cluster Analysis Technique.

    PubMed

    Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz

    2016-03-01

    Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, higher predisposition to be immersed and reduced cybersickness symptoms in the VRE than console gamer and nongamer. These findings can be a useful orientation in clinical practice and future research as they help identify which users are more predisposed to benefit from immersive VREs.
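
    As a simplified, runnable stand-in for the multiple correspondence analysis plus clustering pipeline described above, the sketch below one-hot encodes categorical user attributes, reduces them with PCA and clusters the result with k-means (scikit-learn offers no MCA class, so PCA on indicator variables is used as an approximation); all column names and categories are hypothetical.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical categorical user attributes of the kind profiled in the study.
users = pd.DataFrame({
    "gamer_type": ["pc", "console", "none", "pc", "none", "console"],
    "age_group": ["18-25", "26-35", "36-50", "18-25", "36-50", "26-35"],
    "cybersickness": ["low", "high", "high", "low", "high", "low"],
})

indicators = pd.get_dummies(users)                      # categorical -> 0/1 matrix
coords = PCA(n_components=2).fit_transform(indicators)  # low-dimensional map
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
print(users.assign(profile=profiles))
```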

  15. Hsp90 inhibitors, part 1: definition of 3-D QSAutogrid/R models as a tool for virtual screening.

    PubMed

    Ballante, Flavio; Caroli, Antonia; Wickersham, Richard B; Ragno, Rino

    2014-03-24

    The multichaperone heat shock protein (Hsp) 90 complex mediates the maturation and stability of a variety of oncogenic signaling proteins. For this reason, Hsp90 has emerged as a promising target for anticancer drug development. Herein, we describe a complete computational procedure for building several 3-D QSAR models used as a ligand-based (LB) component of a comprehensive ligand-based (LB) and structure-based (SB) virtual screening (VS) protocol to identify novel molecular scaffolds of Hsp90 inhibitors. By the application of the 3-D QSAutogrid/R method, eight SB PLS 3-D QSAR models were generated, leading to a final multiprobe (MP) 3-D QSAR pharmacophoric model capable of recognizing the most significant chemical features for Hsp90 inhibition. Both the monoprobe and multiprobe models were optimized, cross-validated, and tested against an external test set. The obtained statistical results confirmed the models as robust and predictive to be used in a subsequent VS.
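
    A generic sketch of a cross-validated PLS QSAR model of the kind described is shown below, using scikit-learn rather than the 3-D QSAutogrid/R software itself; the descriptor matrix and activities are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: 60 "training" ligands, each described by 500 grid-probe
# interaction energies (random numbers standing in for real 3-D descriptors),
# with synthetic activities as the response.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=60)

pls = PLSRegression(n_components=5)
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2")   # cross-validated r^2
pls.fit(X, y)
print("fitted r^2:", round(pls.score(X, y), 3),
      "mean cross-validated r^2:", round(q2.mean(), 3))
```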

  16. Second Life: an overview of the potential of 3-D virtual worlds in medical and health education.

    PubMed

    Boulos, Maged N Kamel; Hetherington, Lee; Wheeler, Steve

    2007-12-01

    This hybrid review-case study introduces three-dimensional (3-D) virtual worlds and their educational potential to medical/health librarians and educators. Second Life (http://secondlife.com/) is perhaps the most popular virtual world platform in use today, with an emphasis on social interaction. We describe some medical and health education examples from Second Life, including Second Life Medical and Consumer Health Libraries (Healthinfo Island, funded by a grant from the US National Library of Medicine), and VNEC (Virtual Neurological Education Centre, developed at the University of Plymouth, UK), which we present as two detailed 'case studies'. The pedagogical potentials of Second Life are then discussed, as well as some issues and challenges related to the use of virtual worlds. We have also compiled an up-to-date resource page (http://healthcybermap.org/sl.htm), with additional online material and pointers to support and extend this study.

  17. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
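
    The reported navigational errors can be obtained by comparing the estimated pose with the stage-measured pose, as sketched below (translation error as Euclidean distance, orientation error as the angle of the relative rotation); the poses shown are placeholders, not measurements from the study.

```python
import numpy as np

def pose_errors(r_est, t_est, r_ref, t_ref):
    """Translation error (same units as t) and orientation error (degrees)
    between an estimated camera pose and a reference pose."""
    dist_err = np.linalg.norm(np.asarray(t_est, float) - np.asarray(t_ref, float))
    # Angle of the relative rotation R_rel = R_est @ R_ref^T, via its trace.
    r_rel = np.asarray(r_est, float) @ np.asarray(r_ref, float).T
    cos_angle = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return dist_err, np.degrees(np.arccos(cos_angle))

# Placeholder poses: algorithm estimate vs. micro-positioning-stage measurement (mm).
angle = np.radians(2.0)
r_ref = np.eye(3)
r_est = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])
dist, orient = pose_errors(r_est, [101.0, 49.0, 201.5], r_ref, [100.0, 50.0, 200.0])
print(f"distance error: {dist:.2f} mm, orientation error: {orient:.2f} deg")
```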

  18. Cultivating imagination: development and pilot test of a therapeutic use of an immersive virtual reality CAVE.

    PubMed

    Brennan, Patricia Flatley; Nicolalde, F Daniel; Ponto, Kevin; Kinneberg, Megan; Freese, Vito; Paz, Dana

    2013-01-01

    As informatics applications grow from being data collection tools to platforms for action, the boundary between what constitutes informatics applications and therapeutic interventions begins to blur. Emerging computer-driven technologies such as virtual reality (VR) and mHealth apps may serve as clinical interventions. As part of a larger project intended to provide complements to cognitive behavioral approaches to health behavior change, an interactive scenario was designed to permit unstructured play inside an immersive 6-sided VR CAVE. In this pilot study we examined the technical and functional performance of the CAVE scenario, human tolerance of immersive CAVE experiences, and explored human imagination and the manner in which activity in the CAVE scenarios varied by an individual's level of imagination. Nine adult volunteers participated in a pilot-and-feasibility study. Participants tolerated the 15-minute-long exposure to the scenarios and navigated through the virtual world. Relationships between personal characteristics and behaviors are reported and explored.

  19. Cultivating Imagination: Development and Pilot Test of a Therapeutic Use of an Immersive Virtual Reality CAVE

    PubMed Central

    Brennan, Patricia Flatley; Nicolalde, F. Daniel; Ponto, Kevin; Kinneberg, Megan; Freese, Vito; Paz, Dana

    2013-01-01

    As informatics applications grow from being data collection tools to platforms for action, the boundary between what constitutes informatics applications and therapeutic interventions begins to blur. Emerging computer-driven technologies such as virtual reality (VR) and mHealth apps may serve as clinical interventions. As part of a larger project intended to provide complements to cognitive behavioral approaches to health behavior change, an interactive scenario was designed to permit unstructured play inside an immersive 6-sided VR CAVE. In this pilot study we examined the technical and functional performance of the CAVE scenario, human tolerance of immersive CAVE experiences, and explored human imagination and the manner in which activity in the CAVE scenarios varied by an individual's level of imagination. Nine adult volunteers participated in a pilot-and-feasibility study. Participants tolerated the 15-minute-long exposure to the scenarios and navigated through the virtual world. Relationships between personal characteristics and behaviors are reported and explored. PMID:24551327

  20. The illusion of presence in immersive virtual reality during an fMRI brain scan.

    PubMed

    Hoffman, Hunter G; Richards, Todd; Coda, Barbara; Richards, Anne; Sharar, Sam R

    2003-04-01

    The essence of immersive virtual reality (VR) is the illusion it gives users that they are inside the computer-generated virtual environment. This unusually strong illusion is theorized to contribute to the successful pain reduction observed in burn patients who go into VR during woundcare (www.vrpain.com) and to successful VR exposure therapy for phobias and post-traumatic stress disorder (PTSD). The present study demonstrated for the first time that subjects could experience a strong illusion of presence during an fMRI despite the constraints of the fMRI magnet bore (i.e., immobilized head and loud ambient noise).

  1. Studying social interactions through immersive virtual environment technology: virtues, pitfalls, and future challenges

    PubMed Central

    Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel

    2015-01-01

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET researchers have full control over the interaction partners, can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that indeed studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Beside the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing) and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414

  2. Spatialized sound reproduction for telematic music performances in an immersive virtual environment

    NASA Astrophysics Data System (ADS)

    Chabot, Samuel R. V.

    Telematic performances connect musicians and artists at remote locations to form a single cohesive piece. As these performances become more ubiquitous and more people gain access to very high-speed Internet connections, a variety of new technologies will enable artists and musicians to create brand-new styles of works. The development of the immersive virtual environment, including Rensselaer Polytechnic Institute's own Collaborative-Research Augmented Immersive Virtual Environment Laboratory, sets the stage for these original pieces. The ability to properly spatialize sound within these environments is important for having a complete set of tools. This project uses a local installation to exemplify the techniques and protocols that make this possible. Using the visual coding environment MaxMSP as a receiving client, patches are created to parse incoming commands and coordinate information for engaging sound sources. Their spatialization is done in conjunction with the Virtual Microphone Control system, which is then mapped to loudspeakers through a patch portable to various immersive environment setups.
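
    A minimal sketch of a sending client for such a setup is shown below, assuming Open Sound Control over UDP via the python-osc package; the port number and the "/source/1/xyz" address pattern are assumptions for illustration, not the actual protocol used with the MaxMSP patches.

```python
from pythonosc.udp_client import SimpleUDPClient
import math
import time

client = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical MaxMSP UDP receive port

# Sweep one virtual sound source in a circle around the listener.
for step in range(200):
    azimuth = 2 * math.pi * step / 200
    x, y, z = 2.0 * math.cos(azimuth), 2.0 * math.sin(azimuth), 1.2
    client.send_message("/source/1/xyz", [x, y, z])   # coordinates for the patch
    time.sleep(0.05)
```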

  3. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of visible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are often unavailable due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-tracking, we want to create an innovative method for learning operative techniques in a 3D game format, which can make the education process engaging and effective. Creating a 3D virtual simulator would solve several conceptual problems at once: unlimited practice of practical skills without risk to the patient, a highly realistic operating environment and anatomical body structures, game mechanics that ease information uptake and accelerate memorization of methods, and broad accessibility of the program.

  4. Calculation of the virtual current in an electromagnetic flow meter with one bubble using 3D model.

    PubMed

    Zhang, Xiao-Zhang; Li, Yantao

    2004-04-01

    Based on the theory of electromagnetic induction flow measurement, the Laplace equation in a complicated three-dimensional (3D) domain is solved by an alternating method. Virtual current potentials are obtained for an electromagnetic flow meter with one spherical bubble inside. The solutions are used to investigate the effects of bubble size and bubble position on the virtual current. Comparisons are done among the cases of 2D and 3D models, and of point electrode and large electrode. The results show that the 2D model overestimates the effect, while large electrodes are least sensitive to the bubble. This paper offers fundamentals for the study of the behavior of an electromagnetic flow meter in multiphase flow. For application, the results provide a possible way to estimate errors of the flow meter caused by multiphase flow.

  5. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State

    PubMed Central

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D.; Scherfgen, David; Strüder, Heiko K.; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of a real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence. PMID:26366305

  6. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of a real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence.

  7. Crowd behaviour during high-stress evacuations in an immersive virtual environment

    PubMed Central

    Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W.; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-01-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. PMID:27605166

  8. Crowd behaviour during high-stress evacuations in an immersive virtual environment.

    PubMed

    Moussaïd, Mehdi; Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-09-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects.

  9. A Methodology for Elaborating Activities for Higher Education in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Bravo, Javier; García-Magariño, Iván

    2015-01-01

    Distance education started out limited in comparison to traditional education. Distance teachers and educational organizations have overcome most of these limits, but some other limits still remain as challenges. One of these challenges is to collaboratively learn concepts in an immersive way, similar to education "in situ".…

  10. Virtual screening and rational drug design method using structure generation system based on 3D-QSAR and docking.

    PubMed

    Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T

    2003-08-01

    An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with 3D-QSAR model and docking. Using this method, it is possible to generate a lot of highly diverse molecules and find virtual active lead compounds. The method was validated by the study of a set of anti-tumor drugs. With the constraints of pharmacophore obtained by DISCO implemented in SYBYL 6.8, 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with receptor than the reference compound selected from experimental data. This investigation showed that the proposed method was a feasible way for rational drug design with high screening efficiency.

  11. Immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator.

    PubMed

    Alshaer, Abdulaziz; Regenbrecht, Holger; O'Hare, David

    2017-01-01

    Virtual Reality based driving simulators are increasingly used to train and assess users' abilities to operate vehicles in a controlled and safe way. For the development of those simulators it is important to identify and evaluate design factors affecting perception, behaviour, and driving performance. In an exemplary power wheelchair simulator setting we identified the three immersion factors display type (head-mounted display v monitor), ability to freely change the field of view (FOV), and the visualisation of the user's avatar as potentially affecting perception and behaviour. In a study with 72 participants we found all three factors affected the participants' sense of presence in the virtual environment. In particular the display type significantly affected both perceptual and behavioural measures whereas FOV only affected behavioural measures. Our findings could guide future Virtual Reality simulator designers to evoke targeted user behaviours and perceptions.

  12. A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

    PubMed Central

    Perez-Marcos, Daniel; Solazzi, Massimiliano; Steptoe, William; Oyekoya, Oyewole; Frisoli, Antonio; Weyrich, Tim; Steed, Anthony; Tecchia, Franco; Slater, Mel; Sanchez-Vives, Maria V.

    2012-01-01

    Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems. PMID:22787454

  13. Brave New (Interactive) Worlds: A Review of the Design Affordances and Constraints of Two 3D Virtual Worlds as Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2005-01-01

    Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…

  14. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study

    PubMed Central

    Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-01-01

    Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real
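
    The repeated measures mixed model comparison described in the Methods can be sketched with a random intercept per participant, as below (statsmodels; the data, column names and effect sizes are hypothetical, not the study's dataset).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 60  # participants with both virtual and real purchasing data

# Hypothetical share of spending on fresh fruit and vegetables, in percent.
data = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "environment": ["virtual"] * n + ["real"] * n,
    "share_ffv": np.concatenate([rng.normal(14.3, 4.0, n), rng.normal(17.4, 4.0, n)]),
})

# Linear mixed model: fixed effect of environment, random intercept per participant.
model = smf.mixedlm("share_ffv ~ environment", data, groups=data["participant"])
print(model.fit().summary())
```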

  15. Taking Science Online: Evaluating Presence and Immersion through a Laboratory Experience in a Virtual Learning Environment for Entomology Students

    ERIC Educational Resources Information Center

    Annetta, Leonard; Klesath, Marta; Meyer, John

    2009-01-01

    A 3-D virtual field trip was integrated into an online college entomology course and developed as a trial for the possible incorporation of future virtual environments to supplement online higher education laboratories. This article provides an explanation of the rationale behind creating the virtual experience, the Bug Farm; the method and…

  16. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed.

  17. A virtually imaged defocused array (VIDA) for high-speed 3D microscopy.

    PubMed

    Schonbrun, Ethan; Di Caprio, Giuseppe

    2016-10-01

    We report a method to capture a multifocus image stack based on recording multiple reflections generated by imaging through a custom etalon. The focus stack is collected in a single camera exposure and consequently the information needed for 3D reconstruction is recorded in the camera integration time, which is only 100 µs. We have used the VIDA microscope to temporally resolve the multi-lobed 3D morphology of neutrophil nuclei as they rotate and deform through a microfluidic constriction. In addition, we have constructed a 3D imaging flow cytometer and quantified the nuclear morphology of nearly a thousand white blood cells flowing at a velocity of 3 mm per second. The VIDA microscope is compact and simple to construct, intrinsically achromatic, and the field-of-view and stack number can be easily reconfigured without redesigning diffraction gratings and prisms.

  18. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    PubMed

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings from the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data.

  19. Source fields reconstruction with 3D mapping by means of the virtual acoustic volume concept

    NASA Astrophysics Data System (ADS)

    Forget, S.; Totaro, N.; Guyader, J. L.; Schaeffer, M.

    2016-10-01

    This paper presents the theoretical framework of the virtual acoustic volume concept and two related inverse Patch Transfer Functions (iPTF) identification methods (called u-iPTF and m-iPTF depending on the chosen boundary conditions for the virtual volume). They are based on the application of Green's identity on an arbitrary closed virtual volume defined around the source. The reconstruction of sound source fields combines discrete acoustic measurements performed at accessible positions around the source with the modal behavior of the chosen virtual acoustic volume. The mode shapes of the virtual volume can be computed by a Finite Element solver to handle the geometrical complexity of the source. As a result, it is possible to identify all the acoustic source fields at the real surface of an irregularly shaped structure and irrespective of its acoustic environment. The m-iPTF method is introduced for the first time in this paper. Conversely to the already published u-iPTF method, the m-iPTF method needs only acoustic pressure and avoids particle velocity measurements. This paper is focused on its validation, both with numerical computations and by experiments on a baffled oil pan.

  20. Comparison of grasping movements made by healthy subjects in a 3-dimensional immersive virtual versus physical environment.

    PubMed

    Magdalon, Eliane C; Michaelsen, Stella M; Quevedo, Antonio A; Levin, Mindy F

    2011-09-01

    Virtual reality (VR) technology is being used with increasing frequency as a training medium for motor rehabilitation. However, before addressing training effectiveness in virtual environments (VEs), it is necessary to identify if movements made in such environments are kinematically similar to those made in physical environments (PEs) and the effect of provision of haptic feedback on these movement patterns. These questions are important since reach-to-grasp movements may be inaccurate when visual or haptic feedback is altered or absent. Our goal was to compare kinematics of reaching and grasping movements to three objects performed in an immersive three-dimensional (3D) VE with haptic feedback (cyberglove/grasp system) viewed through a head-mounted display to those made in an equivalent physical environment (PE). We also compared movements in PE made with and without wearing the cyberglove/grasp haptic feedback system. Ten healthy subjects (8 women, 62.1±8.8years) reached and grasped objects requiring 3 different grasp types (can, diameter 65.6mm, cylindrical grasp; screwdriver, diameter 31.6mm, power grasp; pen, diameter 7.5mm, precision grasp) in PE and visually similar virtual objects in VE. Temporal and spatial arm and trunk kinematics were analyzed. Movements were slower and grip apertures were wider when wearing the glove in both the PE and the VE compared to movements made in the PE without the glove. When wearing the glove, subjects used similar reaching trajectories in both environments, preserved the coordination between reaching and grasping and scaled grip aperture to object size for the larger object (cylindrical grasp). However, in VE compared to PE, movements were slower and had longer deceleration times, elbow extension was greater when reaching to the smallest object and apertures were wider for the power and precision grip tasks. Overall, the differences in spatial and temporal kinematics of movements between environments were greater than

  1. Proteopedia: A Collaborative, Virtual 3D Web-Resource for Protein and Biomolecule Structure and Function

    ERIC Educational Resources Information Center

    Hodis, Eran; Prilusky, Jaime; Sussman, Joel L.

    2010-01-01

    Protein structures are hard to represent on paper. They are large, complex, and three-dimensional (3D)--four-dimensional if conformational changes count! Unlike most of their substrates, which can easily be drawn out in full chemical formula, drawing every atom in a protein would usually be a mess. Simplifications like showing only the surface of…

  2. Virtually supportive: A feasibility pilot study of an online support group for dementia caregivers in a 3D virtual environment

    PubMed Central

    O’Connor, Mary-Frances; Arizmendi, Brian J.; Kaszniak, Alfred W.

    2014-01-01

    Caregiver support groups effectively reduce stress from caring for someone with dementia. These same demands can prevent participation in a group. The present feasibility study investigated a virtual online caregiver support group to bring the support group into the home. While online groups have been shown to be helpful, submissions to a message board (vs. live conversation) can feel impersonal. By using avatars, participants interacted via real-time chat in a virtual environment in an 8-week support group. Data indicated lower levels of perceived stress, depression and loneliness across participants. Importantly, satisfaction reports also indicate that caregivers overcame the barriers to participation, and had a strong sense of the group’s presence. This study provides the framework for an accessible and low-cost online support group for dementia caregivers. The study demonstrates the feasibility of an interactive group in a virtual environment for engaging members in meaningful interaction. PMID:24984911

  3. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.

  4. Collaboration and Knowledge Sharing Using 3D Virtual World on "Second Life"

    ERIC Educational Resources Information Center

    Rahim, Noor Faridah A.

    2013-01-01

    A collaborative and knowledge sharing virtual activity on "Second Life" using a learner-centred teaching methodology was initiated between Temasek Polytechnic and The Hong Kong Polytechnic University (HK PolyU) in the October 2011 semester. This paper highlights the author's experience in designing and implementing this e-learning…

  5. The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers

    ERIC Educational Resources Information Center

    Can, Tuncer; Simsek, Irfan

    2015-01-01

    Recent developments in computer and Internet technologies and in three-dimensional modelling necessitate new approaches and methods in the education field and bring new opportunities to higher education. The Internet and virtual learning environments have changed the learning opportunities by diversifying the learning options not…

  6. Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis

    ERIC Educational Resources Information Center

    Chow, Meyrick

    2016-01-01

    There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…

  7. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse, inferring the user's intended direction by means of frame differencing. We divide each Webcam frame into nine grids and make use of background registration to compute the moving object. In order to…
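
    As a rough illustration of the frame-difference idea sketched above, the code below finds which cell of a 3x3 grid contains the most inter-frame motion, which could then be mapped to a direction command. It assumes OpenCV and NumPy; the grid layout, threshold value and direction mapping are illustrative guesses, not the authors' implementation.

```python
import cv2
import numpy as np

def dominant_grid_cell(prev_gray, curr_gray, thresh=25):
    """Return (row, col) of the 3x3 grid cell with the most frame-to-frame motion."""
    diff = cv2.absdiff(curr_gray, prev_gray)              # pixel-wise frame difference
    _, motion = cv2.threshold(diff, thresh, 1, cv2.THRESH_BINARY)
    h, w = motion.shape
    counts = np.array([[motion[r * h // 3:(r + 1) * h // 3,
                               c * w // 3:(c + 1) * w // 3].sum()
                        for c in range(3)] for r in range(3)])
    return np.unravel_index(counts.argmax(), counts.shape)

cap = cv2.VideoCapture(0)                                  # default webcam
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    row, col = dominant_grid_cell(prev, curr)
    # Map the active cell to a direction command, e.g. top row -> "forward",
    # left column -> "turn left", centre cell -> no movement.
    prev = curr
```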

  8. Virtual Presence and the Mind's Eye in 3-D Online Communities

    NASA Astrophysics Data System (ADS)

    Beacham, R. C.; Denard, H.; Baker, D.

    2011-09-01

    Digital technologies have introduced fundamental changes in the forms, content, and media of communication. Indeed, some have suggested we are in the early stages of a seismic shift comparable to that in antiquity with the transition from a primarily oral culture to one based upon writing. The digital transformation is rapidly displacing the long-standing hegemony of text, and restoring in part social, bodily, oral and spatial elements, but in radically reconfigured forms and formats. Contributing to and drawing upon such changes and possibilities, scholars and those responsible for sites preserving or displaying cultural heritage, have undertaken projects to explore the properties and potential of the online communities enabled by "Virtual Worlds" and related platforms for teaching, collaboration, publication, and new modes of disciplinary research. Others, keenly observing and evaluating such work, are poised to contribute to it. It is crucial that leadership be provided to ensure that serious and sustained investigation be undertaken by scholars who have experience, and achievements, in more traditional forms of research, and who perceive the emerging potential of Virtual World work to advance their investigations. The Virtual Museums Transnational Network will seek to engage such scholars and provide leadership in this emerging and immensely attractive new area of cultural heritage exploration and experience. This presentation reviews examples of the current "state of the art" in heritage based Virtual World initiatives, looking at the new modes of social interaction and experience enabled by such online communities, and some of the achievements and future aspirations of this work.

  9. "The Evolution of e-Learning in the Context of 3D Virtual Worlds"

    ERIC Educational Resources Information Center

    Kotsilieris, Theodore; Dimopoulou, Nikoletta

    2013-01-01

    Information and Communication Technologies (ICT) offer new approaches towards knowledge acquisition and collaboration through distance learning processes. Web-based Learning Management Systems (LMS) have transformed the way that education is conducted nowadays. At the same time, the adoption of Virtual Worlds in the educational process is of great…

  10. Identification of potential influenza virus endonuclease inhibitors through virtual screening based on the 3D-QSAR model.

    PubMed

    Kim, J; Lee, C; Chong, Y

    2009-01-01

    Influenza endonucleases have emerged as an attractive target of antiviral therapy for influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activity against influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q² = 0.763, r² = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in a UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates.
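
    The CoMFA/CoMSIA field calculations themselves require specialized molecular-modelling software, but the downstream screening logic (fit a QSAR regression on known inhibitors, then rank a candidate library by predicted activity) can be sketched generically. The sketch below assumes precomputed descriptor matrices stored under hypothetical file names and uses PLS regression, the statistical engine underlying CoMFA/CoMSIA-style models; it is not the authors' actual workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Descriptor matrix for the 34 known inhibitors (hypothetical, precomputed elsewhere,
# e.g. molecular interaction fields sampled on a grid) and their pIC50 values.
X_train = np.load("endonuclease_fields.npy")    # hypothetical file
y_train = np.load("endonuclease_pic50.npy")     # hypothetical file

pls = PLSRegression(n_components=5)

# Leave-one-out cross-validation yields the q^2 statistic quoted for CoMFA/CoMSIA models.
y_cv = cross_val_predict(pls, X_train, y_train, cv=LeaveOneOut())
q2 = 1 - ((y_train - y_cv.ravel()) ** 2).sum() / ((y_train - y_train.mean()) ** 2).sum()

pls.fit(X_train, y_train)

# Descriptors for virtual-screening candidates that passed the pharmacophore query.
X_library = np.load("chemdiv_hits_fields.npy")  # hypothetical file
ranked = np.argsort(-pls.predict(X_library).ravel())   # highest predicted activity first
print(f"q2 = {q2:.3f}; top-ranked candidates: {ranked[:3]}")
```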

  11. Towards a Transcription System of Sign Language for 3D Virtual Agents

    NASA Astrophysics Data System (ADS)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who became deaf before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is generally an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system must provide sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that the articulation comes close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand sign language structure and grammar.

  12. Numerical study of the 3D-shape of a drop immersed in a fluid under an elongational flow with vorticity

    NASA Astrophysics Data System (ADS)

    Sanjuan, A. S.; Reyes, M. A. H.; Minzoni, A. A.; Geffroy, E.

    2017-01-01

    This work focuses on a three-dimensional analysis of the deformation of a drop immersed in a Newtonian fluid and subjected to a 2D elongational flow with vorticity. The study of steady-state deformations of the cross-section of the drop shows a prevalently non-circular shape. Neither the axisymmetric ellipsoidal idealization nor the linear dependence between capillary number and drop deformation predicted by the Taylor and Cox theories is observed. Our numerical results are consistent with experiments and with other numerical simulations; in the latter cases, however, measurements of the drop cross-section are few and only a limited class of flows is applied. In this work, deformations induced by general two-dimensional flows upon the 3D drop shape are presented, with special emphasis on the length scale along the third axis, perpendicular to the plane of the applied flow field.

  13. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the projector parameters. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projected indication accuracy of the system is verified with a subpixel pattern projection technique.
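
    The abstract does not state the model explicitly; as a point of reference, a projector is commonly modelled as an inverse pinhole camera, so a back-projection model of the general form below (intrinsics K, extrinsics R and t relating the digitized 3D model frame to the projector frame) is a plausible sketch rather than the paper's exact formulation.

```latex
% A 3D point X on the digitized model maps to projector pixel coordinates (u, v);
% s is an arbitrary scale factor.
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \left( R\,X + t \right),
\qquad
K = \begin{pmatrix} f_u & \gamma & c_u \\ 0 & f_v & c_v \\ 0 & 0 & 1 \end{pmatrix}.
% Calibration estimates K together with R and t from correspondences between known
% 3D coordinates and the projected pattern; the extrinsics are then refined using points
% measured by the external 3D positioning equipment.
```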

  14. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on this optical-sensor system, we proposed four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator makes it possible to interact with auto-stereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  15. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based system for reconstructing 3D human shape from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of one hundred whole-body mesh models. The mesh models are homologous, so all models share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM), which allows the body type of the model to be changed with a few parameters. Pose changes are achieved by reconstructing skeleton structures from joints implanted in the model. By applying pose changes after body-type deformation, the model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model and the input silhouettes, using only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.
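
    A minimal sketch of the two ingredients named above, a PCA shape space built from the registered mesh database and a derivative-free search over its parameters, is given below. It assumes NumPy, the reference `cma` package for CMA-ES, hypothetical file names, and a `compare_contours` routine supplied elsewhere; it illustrates the general recipe, not the authors' implementation.

```python
import numpy as np
import cma  # pip install cma; reference CMA-ES implementation

# Vertices of N registered (homologous) body meshes, flattened to shape (N, 3V).
meshes = np.load("body_database.npy")            # hypothetical file
mean_shape = meshes.mean(axis=0)

# PCA via SVD of the centered data: rows of Vt are the principal shape components.
_, _, Vt = np.linalg.svd(meshes - mean_shape, full_matrices=False)
k = 10
components = Vt[:k]                              # keep the first k shape parameters

def synthesize(params):
    """Reconstruct a flattened body mesh from k shape parameters."""
    return mean_shape + params @ components

def silhouette_error(params):
    """Hypothetical objective: discrepancy between the projected model contours and
    the input front/side silhouettes (rendering and contour extraction omitted)."""
    verts = synthesize(np.asarray(params)).reshape(-1, 3)
    return compare_contours(verts)               # assumed helper, provided elsewhere

# Derivative-free search over the shape parameters, as in the paper's use of CMA-ES.
es = cma.CMAEvolutionStrategy(np.zeros(k), 0.5)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [silhouette_error(p) for p in candidates])
best_shape = synthesize(np.asarray(es.result.xbest))
```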

  16. Height, social comparison, and paranoia: an immersive virtual reality experimental study.

    PubMed

    Freeman, Daniel; Evans, Nicole; Lister, Rachel; Antley, Angus; Dunn, Graham; Slater, Mel

    2014-08-30

    Mistrust of others may build upon perceptions of the self as vulnerable, consistent with an association of paranoia with perceived lower social rank. Height is a marker of social status and authority. Therefore we tested the effect of manipulating height, as a proxy for social rank, on paranoia. Height was manipulated within an immersive virtual reality simulation. Sixty females who reported paranoia experienced a virtual reality train ride twice: at their normal and reduced height. Paranoia and social comparison were assessed. Reducing a person's height resulted in more negative views of the self in comparison with other people and increased levels of paranoia. The increase in paranoia was fully mediated by changes in social comparison. The study provides the first demonstration that reducing height in a social situation increases the occurrence of paranoia. The findings indicate that negative social comparison is a cause of mistrust.

  17. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications.

    PubMed

    Smith, Jordan W

    2015-09-11

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings.

  18. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications

    PubMed Central

    Smith, Jordan W.

    2015-01-01

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings. PMID:26378565

  19. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  20. Immersed Boundary Models for Quantifying Flow-Induced Mechanical Stimuli on Stem Cells Seeded on 3D Scaffolds in Perfusion Bioreactors

    PubMed Central

    Smeets, Bart; Odenthal, Tim; Luyten, Frank P.; Ramon, Herman; Papantoniou, Ioannis; Geris, Liesbet

    2016-01-01

    Perfusion bioreactors regulate flow conditions in order to provide cells with oxygen, nutrients and flow-associated mechanical stimuli. Locally, these flow conditions can vary depending on the scaffold geometry, cellular confluency and amount of extracellular matrix deposition. In this study, a novel application of the immersed boundary method was introduced in order to represent a detailed deformable cell attached to a 3D scaffold inside a perfusion bioreactor and exposed to microscopic flow. The immersed boundary model permits the prediction of mechanical effects of the local flow conditions on the cell. Incorporating stiffness values measured with atomic force microscopy and micro-flow boundary conditions obtained from computational fluid dynamics simulations on the entire scaffold, we compared cell deformation, cortical tension, normal and shear pressure between different cell shapes and locations. We observed a large effect of the precise cell location on the local shear stress and we predicted flow-induced cortical tensions in the order of 5 pN/μm, at the lower end of the range reported in the literature. The proposed method provides an interesting tool to study perfusion bioreactor processes down to the level of the individual cell’s micro-environment, which can further aid in the achievement of robust bioprocess control for regenerative medicine applications. PMID:27658116

  1. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at a distance from the user where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation would be used, if it
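
    The selection logic described above, image-based representations at a distance and the full 3D model up close or whenever the user interacts with the object, can be sketched as a simple node type. The class, field names and thresholds below are illustrative assumptions, not WorldToolKit's actual node API.

```python
from dataclasses import dataclass

@dataclass
class MixedRepresentationNode:
    """Scene-graph node holding several representations (LODs) of one object."""
    model3d: object          # full geometry: close range, or any time the user interacts
    billboard: object        # image-based stand-in for intermediate range
    environment_map: object  # far-range representation
    near_threshold: float = 5.0   # metres; roughly where internal depth becomes perceptible
    far_threshold: float = 50.0

    def select(self, viewing_distance: float, interacting: bool = False):
        # During interaction the 3D model is always used, regardless of distance.
        if interacting or viewing_distance < self.near_threshold:
            return self.model3d
        if viewing_distance < self.far_threshold:
            return self.billboard
        return self.environment_map
```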

  2. 3-D Localization of Virtual Sound Sources: Effects of Visual Environment, Pointing Method, and Training

    PubMed Central

    Majdak, Piotr; Goupell, Matthew J.; Laback, Bernhard

    2010-01-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE) (darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In experiment 2, subjects were provided sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies. PMID:20139459

  3. Smartphone applications for immersive virtual reality therapy for internet addiction and internet gaming disorder.

    PubMed

    Zhang, Melvyn W B; Ho, Roger C M

    2016-11-25

    There have been rapid advances in technology over the past decade, and virtual reality is increasingly utilized as a healthcare intervention in many disciplines, including Medicine, Surgery and Psychiatry. In Psychiatry, most current interventions involving virtual reality technology are limited to applications for anxiety disorders. With the advances in technology, Internet addiction and Internet gaming disorder are increasingly prevalent. To date, these disorders are still being treated using conventional psychotherapy methods such as cognitive behavioural therapy. However, a growing body of research combines other therapies with cognitive behavioural therapy in an attempt to reduce drop-out rates and to make such interventions more relevant to the targeted group of addicts, who are mostly adolescents. A prior study in Korea has demonstrated that virtual reality therapy has efficacy comparable to that of cognitive behavioural therapy; however, that intervention requires specialized screens and devices. The objective of the current article is therefore to highlight how smartphone applications could be designed and utilized for immersive virtual reality treatment, alongside low-cost wearables.

  4. An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation

    PubMed Central

    Seraglia, Bruno; Gamberini, Luciano; Priftis, Konstantinos; Scatturin, Pietro; Martinelli, Massimiliano; Cutini, Simone

    2011-01-01

    For over two decades Virtual Reality (VR) has been used as a useful tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that underlie VR experiences. Although functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper fNIRS probe application, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants bisected lines in virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed. PMID:22207843

  5. Individual reactions to a multisensory immersive virtual environment: the impact of a wind farm on individuals.

    PubMed

    Ruotolo, Francesco; Senese, Vincenzo Paolo; Ruggiero, Gennaro; Maffei, Luigi; Masullo, Massimiliano; Iachini, Tina

    2012-08-01

    The aim of this study was to assess the impact of a wind farm on individuals by means of an audio-visual methodology that tried to simulate biologically plausible individual-environment interactions. To disentangle the effects of auditory and visual components on cognitive performances and subjective evaluations, unimodal (Audio or Video) and bimodal (Audio + Video) approaches were compared. Participants were assigned to three experimental conditions that reproduced a wind farm by means of an immersive virtual reality system: bimodal condition, reproducing scenarios with both acoustic and visual stimuli; unimodal visual condition, with only visual stimuli; unimodal auditory condition, with only auditory stimuli. While immersed in the virtual scenarios, participants performed tasks assessing verbal fluency, short-term verbal memory, backward counting, and distance estimations (egocentric: how far is the turbine from you?; allocentric: how far is the turbine from the target?). Afterwards, participants reported their degree of visual and noise annoyance. The results revealed that the presence of a visual scenario as compared to the only availability of auditory stimuli may exert a negative effect on resource-demanding cognitive tasks but a positive effect on perceived noise annoyance. This supports the idea that humans perceive the environment holistically and that auditory and visual features are processed in close interaction.

  6. A 3D immersed finite element method with non-homogeneous interface flux jump for applications in particle-in-cell simulations of plasma-lunar surface interactions

    NASA Astrophysics Data System (ADS)

    Han, Daoru; Wang, Pu; He, Xiaoming; Lin, Tao; Wang, Joseph

    2016-09-01

    Motivated by the need to handle complex boundary conditions efficiently and accurately in particle-in-cell (PIC) simulations, this paper presents a three-dimensional (3D) linear immersed finite element (IFE) method with non-homogeneous flux jump conditions for solving electrostatic field involving complex boundary conditions using structured meshes independent of the interface. This method treats an object boundary as part of the simulation domain and solves the electric field at the boundary as an interface problem. In order to resolve charging on a dielectric surface, a new 3D linear IFE basis function is designed for each interface element to capture the electric field jump on the interface. Numerical experiments are provided to demonstrate the optimal convergence rates in L2 and H1 norms of the IFE solution. This new IFE method is integrated into a PIC method for simulations involving charging of a complex dielectric surface in a plasma. A numerical study of plasma-surface interactions at the lunar terminator is presented to demonstrate the applicability of the new method.
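
    Written generically, the electrostatic interface problem described above takes roughly the following form; the symbols (potential φ, permittivity ε, space charge ρ, surface charge σ_s on the dielectric interface Γ) are assumed here and may differ from the paper's notation.

```latex
% Potential \phi on a structured mesh cut by the interface \Gamma:
-\nabla \cdot \left( \varepsilon \, \nabla \phi \right) = \rho
    \quad \text{in } \Omega^{-} \cup \Omega^{+},
\qquad
\left[ \phi \right]_{\Gamma} = 0,
\qquad
\left[ \varepsilon \, \frac{\partial \phi}{\partial n} \right]_{\Gamma} = \sigma_s ,
% plus conditions on the outer boundary \partial\Omega (the sign of \sigma_s depends on
% the chosen interface normal). The non-homogeneous flux jump \sigma_s, i.e. the surface
% charge deposited by the plasma, is what the new linear IFE basis functions are built
% to capture on interface elements, so the mesh need not conform to \Gamma.
```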

  7. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are imperative. To this end, three-dimensional, computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  8. Combining Immersive Virtual Worlds and Virtual Learning Environments into an Integrated System for Hosting and Supporting Virtual Conferences

    NASA Astrophysics Data System (ADS)

    Polychronis, Nikolaos; Patrikakis, Charalampos; Voulodimos, Athanasios

    In this paper, a proposal for hosting and supporting virtual conferences based on state-of-the-art web technologies and computer-mediated education software is presented. The proposed system consists of a virtual conference venue hosted on the Second Life platform, targeted at hosting synchronous conference sessions, and of a web space created with the e-learning platform Moodle, targeted at serving the needs of asynchronous communication as well as user and content management. The use of Sloodle (the next generation of Moodle software incorporating virtual-world support capabilities), which up to now has been used only in traditional education, enables the combination of the virtual conference venue and the conference supporting site into an integrated system that allows successful and cost-effective virtual conferences to be conducted.

  9. Web-Based Immersive Virtual Patient Simulators: Positive Effect on Clinical Reasoning in Medical Education

    PubMed Central

    Heiermann, Nadine; Plum, Patrick Sven; Wahba, Roger; Chang, De-Hua; Maus, Martin; Chon, Seung-Hun; Hoelscher, Arnulf H; Stippel, Dirk Ludger

    2015-01-01

    Background: Clinical reasoning is based on the declarative and procedural knowledge of workflows in clinical medicine. Educational approaches such as problem-based learning or mannequin simulators support learning of procedural knowledge. Immersive patient simulators (IPSs) go one step further, as they allow an illusionary immersion into a synthetic world. Students can freely navigate an avatar through a three-dimensional environment, interact with the virtual surroundings, and treat virtual patients. Through playful learning with an IPS, medical workflows can be repetitively trained and internalized. As there are only a few university-driven IPSs with a profound amount of medical knowledge available, we developed a university-based IPS framework. Our simulator is free to use and combines a high degree of immersion with in-depth medical content. By adding disease-specific content modules, the simulator framework can be expanded depending on curricular demands. However, these new educational tools compete with traditional teaching. Objective: It was our aim to develop an educational content module that teaches clinical and therapeutic workflows in surgical oncology. Furthermore, we wanted to examine how the use of this module affects student performance. Methods: The new module was based on the declarative and procedural learning targets of the official German medical examination regulations. The module was added to our custom-made IPS named ALICE (Artificial Learning Interface for Clinical Education). ALICE was evaluated on 62 third-year students. Results: Students showed a high degree of motivation when using the simulator, as most of them had fun using it. ALICE showed a positive impact on clinical reasoning, as there was a significant improvement in determining the correct therapy after using the simulator. ALICE positively impacted the rise in declarative knowledge, as there was improvement in answering multiple-choice questions before and after simulator use. Conclusions:

  10. Comparative brain morphology of Neotropical parrots (Aves, Psittaciformes) inferred from virtual 3D endocasts.

    PubMed

    Carril, Julieta; Tambussi, Claudia Patricia; Degrange, Federico Javier; Benitez Saldivar, María Juliana; Picasso, Mariana Beatriz Julieta

    2016-08-01

    Psittaciformes are a very diverse group of non-passerine birds, with advanced cognitive abilities and highly developed locomotor and feeding behaviours. Using computed tomography and three-dimensional (3D) visualization software, the endocasts of 14 extant Neotropical parrots were reconstructed, with the aim of analysing, comparing and exploring the morphology of the brain within the clade. A 3D geomorphometric analysis was performed, and the encephalization quotient (EQ) was calculated. Brain morphology character states were traced onto a Psittaciformes tree in order to facilitate interpretation of morphological traits in a phylogenetic context. Our results indicate that: (i) there are two conspicuously distinct brain morphologies, one considered walnut type (quadrangular and wider than long) and the other rounded (narrower and rostrally tapered); (ii) Psittaciformes possess a noticeable notch between hemisphaeria that divides the bulbus olfactorius; (iii) the plesiomorphic and most frequently observed characteristics of Neotropical parrots are a rostrally tapered telencephalon in dorsal view, distinctly enlarged dorsal expansion of the eminentia sagittalis and conspicuous fissura mediana; (iv) there is a positive correlation between body mass and brain volume; (v) psittacids are characterized by high EQ values that suggest high brain volumes in relation to their body masses; and (vi) the endocranial morphology of the Psittaciformes as a whole is distinctive relative to other birds. This new knowledge of brain morphology offers much potential for further insight in paleoneurological, phylogenetic and evolutionary studies.

  11. Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality

    PubMed Central

    Normand, Jean-Marie; Giannopoulos, Elias; Spanlang, Bernhard; Slater, Mel

    2011-01-01

    Background: Body change illusions have been of great interest in recent years for the understanding of how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first-person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. Methodology: Twenty-two participants entered into a virtual reality (VR) delivered through a stereo head-tracked wide field-of-view head-mounted display. They saw from a first-person perspective a virtual body substituting their own that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart that they saw in the VR. There was a synchronous condition where their prodding movements were synchronous with what they felt and saw and an asynchronous condition where this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also a comparison of before and after self-estimates of belly size produced by direct visual manipulation of the virtual body seen from the first-person perspective. Conclusions: The results show that a first-person perspective of a virtual body that substitutes for their own body in virtual reality, together with synchronous multisensory stimulation, can temporarily produce changes in body representation towards a larger belly size. This was demonstrated by (a) questionnaire results, (b) the difference between the self-estimated belly size, judged from a first-person perspective, after and before the experimental

  12. Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors

    NASA Astrophysics Data System (ADS)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VEs) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for the confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, that is, specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  13. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    Recent developments in 3D scanning technologies have not been accompanied by comparable developments in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand, and the information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphic inscriptions are among the most affected by these processes because, by their very nature, they are carved into the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors, and every detail in the remaining carvings can make a huge difference to the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of ongoing work dedicated to exploring the information contained in 3D scanning data sets in new ways. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It makes it possible to reveal human-made patterns still present on the surface but invisible to the naked eye or to any other archaeological inspection technique. It is called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, against the wider forms of the backdrop. Afterwards, we simulated the process of building Polynomial Texture Maps, a widespread technique that has been contributing to archaeological studies for some years, in a 3D virtual environment using the results of MRM
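
    As an illustrative analogue of the residual idea on a gridded depth map (the published MRM operates on 3D scanning meshes and its exact formulation is not reproduced here), the sketch below subtracts a heavily smoothed "backdrop" surface so that only the shallow carvings remain. It assumes NumPy and SciPy, and the file name and smoothing scale are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def morphological_residual(depth_map: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Subtract a strongly smoothed 'backdrop' surface from a gridded depth map,
    leaving the shallow details (residuals) that the smoothing removed."""
    backdrop = gaussian_filter(depth_map, sigma=sigma)   # large-scale rock form
    return depth_map - backdrop                          # faint engravings stand out

# depth = np.load("panel_depth_map.npy")     # hypothetical rasterized scan of a rock panel
# residual = morphological_residual(depth)
# The residual can then be rendered with an exaggerated colour scale or relit virtually,
# as in the Polynomial Texture Map simulation described above.
```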

  14. A Combined Pharmacophore Modeling, 3D QSAR and Virtual Screening Studies on Imidazopyridines as B-Raf Inhibitors.

    PubMed

    Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun

    2015-05-29

    B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used for effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q² = 0.621, predictive r² = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model obtained was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained.

  15. a Hand-Free Solution for the Interaction in AN Immersive Virtual Environment: the Case of the Agora of Segesta

    NASA Astrophysics Data System (ADS)

    Olivito, R.; Taccola, E.; Albertini, N.

    2015-02-01

    The paper illustrates the project of an interdisciplinary team of archaeologists and researchers from the Scuola Normale Superiore and the University of Pisa. The synergy between these centres has recently allowed a more articulated 3D simulation of the agora of Segesta, where archaeological excavations have brought to light the remains of a huge public building (stoa) of the Late Hellenistic period. Computer graphics and image-based modeling have been used to monitor, document and record the different phases of the excavation activity (layers, finds, wall structures) and to create a 3D model of the whole site. To maximize the level of interaction, all the models can be managed by an application specially designed for an immersive virtual environment (a CAVE-like system). By using a hand-tracking sensor (Leap) in a non-standard way, the application allows completely hands-free interaction with the simulation of the agora of Segesta and the different phases of the fieldwork. More specifically, the operator can use simple hand gestures to activate a natural interface, scroll through and visualize the perfectly overlapped models of the archaeological layers, pop up the models of single meaningful objects discovered during the excavation, and obtain the related metadata (stored on a dedicated server), which can be viewed on external devices (e.g. tablets or monitors) without further wearable devices. All these functions are contextualized within the whole simulation of the agora, so that it is possible to verify old interpretations and develop new ones in real time, simulating within the CAVE the whole archaeological investigation, going over the different phases of the excavation more rapidly, retrieving information that may have been overlooked during the fieldwork, and verifying, even ex post, issues not correctly documented during the fieldwork. The opportunity to physically interact with the 3D model

  16. Level of Immersion in Virtual Environments Impacts the Ability to Assess and Teach Social Skills in Autism Spectrum Disorder

    PubMed Central

    Bugnariu, Nicoleta L.

    2016-01-01

    Abstract Virtual environments (VEs) may be useful for delivering social skills interventions to individuals with autism spectrum disorder (ASD). Immersive VEs provide opportunities for individuals with ASD to learn and practice skills in a controlled replicable setting. However, not all VEs are delivered using the same technology, and the level of immersion differs across settings. We group studies into low-, moderate-, and high-immersion categories by examining five aspects of immersion. In doing so, we draw conclusions regarding the influence of this technical manipulation on the efficacy of VEs as a tool for assessing and teaching social skills. We also highlight ways in which future studies can advance our understanding of how manipulating aspects of immersion may impact intervention success. PMID:26919157

  17. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher, or missed entirely, if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole-chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
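
    The coordinate-generation step mentioned above can follow several recipes; a common one converts contact frequencies to pseudo-distances and embeds them in 3D with multidimensional scaling. The sketch below assumes NumPy and scikit-learn, a hypothetical binned contact matrix, and a simple inverse-frequency conversion, which may differ from the conversion used for the published models.

```python
import numpy as np
from sklearn.manifold import MDS

def hic_to_coordinates(contact_matrix, alpha: float = 1.0) -> np.ndarray:
    """Embed a binned Hi-C contact-frequency matrix as 3D coordinates, using the
    heuristic distance ~ 1 / frequency**alpha and metric multidimensional scaling."""
    freq = np.array(contact_matrix, dtype=float)          # copy so the input is untouched
    freq[freq == 0] = freq[freq > 0].min()                # avoid division by zero
    distances = 1.0 / freq ** alpha
    np.fill_diagonal(distances, 0.0)
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(distances)                   # (n_bins, 3) coordinates

# coords = hic_to_coordinates(np.load("chr21_contacts.npy"))   # hypothetical file
# The coordinates can then be rendered as a tube or ribbon and annotated with genes,
# CTCF sites and enhancers in a VR viewer.
```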

  18. Enhancing Scientific Collaboration, Transparency, and Public Access: Utilizing the Second Life Platform to Convene a Scientific Conference in 3-D Virtual Space

    NASA Astrophysics Data System (ADS)

    McGee, B. W.

    2006-12-01

    Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible both to begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an embedded physics engine and streaming audio and video capability, and, unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist, and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and perform demonstrations or premiere three-dimensional renderings of objects, processes, or information. An enhanced presentation like those possible with Second Life would be more engaging to non-scientists, and such an event would be accessible to the general users of Second Life, who could have an

  19. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  20. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments.

    PubMed

    Slater, Mel

    2009-12-12

    In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is 'being there', often called 'presence', the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not 'there' and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.

  1. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments

    PubMed Central

    Slater, Mel

    2009-01-01

    In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality. PMID:19884149

  2. iVFTs - immersive virtual field trips for interactive learning about Earth's environment.

    NASA Astrophysics Data System (ADS)

    Bruce, G.; Anbar, A. D.; Semken, S. C.; Summons, R. E.; Oliver, C.; Buxner, S.

    2014-12-01

    Innovations in immersive interactive technologies are changing the way students explore Earth and its environment. State-of-the-art hardware has given developers the tools needed to capture high-resolution spherical content, 360° panoramic video, giga-pixel imagery, and unique viewpoints via unmanned aerial vehicles as they explore remote and physically challenging regions of our planet. Advanced software enables integration of these data into seamless, dynamic, immersive, interactive, content-rich, and learner-driven virtual field explorations, experienced online via HTML5. These surpass conventional online exercises that use 2-D static imagery and enable the student to engage in these virtual environments that are more like games than like lectures. Grounded in the active learning of exploration, inquiry, and application of knowledge as it is acquired, users interact non-linearly in conjunction with an intelligent tutoring system (ITS). The integration of this system allows the educational experience to be adapted to each individual student as they interact within the program. Such explorations, which we term "immersive virtual field trips" (iVFTs), are being integrated into cyber-learning allowing science teachers to take students to scientifically significant but inaccessible environments. Our team and collaborators are producing a diverse suite of freely accessible, iVFTs to teach key concepts in geology, astrobiology, ecology, and anthropology. Topics include Early Life, Biodiversity, Impact craters, Photosynthesis, Geologic Time, Stratigraphy, Tectonics, Volcanism, Surface Processes, The Rise of Oxygen, Origin of Water, Early Civilizations, Early Multicellular Organisms, and Bioarcheology. These diverse topics allow students to experience field sites all over the world, including, Grand Canyon (USA), Flinders Ranges (Australia), Shark Bay (Australia), Rainforests (Panama), Teotihuacan (Mexico), Upheaval Dome (USA), Pilbara (Australia), Mid-Atlantic Ridge

  3. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT.709. A prerequisite for this is a 3D model, which can be created within this computer-aided virtual environment. The onsite measured surface luminance, chromaticity, and spectral data were used as input to established real-time indirect illumination and physically based algorithms to produce the best RGB approximation for generating images of the objects. Conversion of RGB to and from spectra has been a major undertaking, since an infinite number of spectra must be matched to create the same colors that are defined by RGB in the program. The ability to simulate light intensity, candle power, and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under precisely controlled representations of the light spectrum that can be experienced in 3D format in a virtual environment, as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.
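
    As a rough illustration of the colorimetric step described above (a generic sketch, not the authors' pipeline), the following Python fragment converts a measured chromaticity/luminance triple (x, y, Y) to CIE XYZ and then to linear RGB with the standard BT.709/sRGB matrix; the sample measurement values are hypothetical.

      import numpy as np

      # Standard CIE XYZ -> linear RGB matrix for BT.709 primaries with a D65 white point.
      XYZ_TO_RGB = np.array([
          [ 3.2406, -1.5372, -0.4986],
          [-0.9689,  1.8758,  0.0415],
          [ 0.0557, -0.2040,  1.0570],
      ])

      def xyY_to_linear_rgb(x, y, Y):
          """Convert CIE xyY (chromaticity plus luminance) to linear RGB."""
          X = x * Y / y
          Z = (1.0 - x - y) * Y / y
          rgb = XYZ_TO_RGB @ np.array([X, Y, Z])
          return np.clip(rgb, 0.0, None)  # negative components are out of gamut

      # Hypothetical measurement: warm gallery lighting on a surface, relative luminance 0.6
      print(xyY_to_linear_rgb(0.44, 0.40, 0.60))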

  4. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding the mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming.

  5. A New Ionosphere Tomography Algorithm with Two-Grids Virtual Observations Constraints and 3D Velocity Profile

    NASA Astrophysics Data System (ADS)

    Kong, Jian; Yao, Yibin; Shum, Che-Kwan

    2014-05-01

    Due to the sparsity of the world's GNSS stations and the limitations of projection angles, GNSS-based ionosphere tomography is a typical ill-posed problem. There are two main ways to address it. The first is joint inversion combining multi-source data. The second is to use a priori or reference ionosphere models, e.g., the IRI or GIM models, as constraints to improve the conditioning of the normal equation. The traditional way of adding constraints with virtual observations only addresses the sparsity of stations; the virtual observations still lack horizontal grid constraints and therefore cannot fundamentally improve the near-singularity of the normal equation. In this paper, we impose a priori constraints by adding virtual observations in n-dimensional space, which can greatly reduce the condition number of the normal equation. After the inversion region is gridded, we can then form a stable structure among the grids with loose constraints. We further consider that the ionosphere does change within a certain temporal scale, e.g., two hours. In order to establish a more sophisticated and realistic ionosphere model and obtain real-time ionosphere electron density velocity (IEDV) information, we introduce grid electron density velocity parameters, which can be estimated simultaneously with the electron density parameters. The velocity parameters not only enhance the temporal resolution of the model, thereby reflecting finer structure (short-term disturbances) under disturbed ionospheric conditions, but also provide a new way for real-time detection and prediction of 3D ionospheric changes. We applied the new algorithm to GNSS data collected in Europe, performing tomographic inversion of ionosphere electron density and velocity at 2-hour resolution, with results consistent throughout the whole-day variation. We then validate the resulting tomography model
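
    The key numerical point above, that appending virtual (pseudo-)observations improves the conditioning of an otherwise near-singular normal equation, can be shown with a toy example. The sketch below is not the authors' algorithm; it is a minimal numpy demonstration with made-up dimensions and a hypothetical damping weight lam.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy ill-posed tomography geometry: far fewer ray observations than grid unknowns.
      n_rays, n_grid = 30, 100
      A = rng.random((n_rays, n_grid))   # sparse-angle design matrix (toy)
      N = A.T @ A                        # normal matrix, rank-deficient here

      # Virtual observations tie each grid cell loosely to an a priori value
      # (Tikhonov-style damping); lam controls how loose the constraint is.
      lam = 0.1
      N_constrained = N + (lam ** 2) * np.eye(n_grid)

      print("condition number without constraints:", np.linalg.cond(N))
      print("condition number with virtual obs.  :", np.linalg.cond(N_constrained))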

  6. Global Warming and the Arctic in 3D: A Virtual Globe for Outreach

    NASA Astrophysics Data System (ADS)

    Manley, W. F.

    2006-12-01

    Virtual Globes provide a new way to capture and inform the public's interest in environmental change. As an example, a recent Google Earth presentation conveyed 'key findings' from the Arctic Climate Impact Assessment (ACIA, 2004) to middle school students during the 2006 INSTAAR/NSIDC Open House at the University of Colorado. The 20-minute demonstration to 180 eighth graders began with an introduction and a view of the Arctic from space, zooming into the North American Arctic, then to a placemark for the first key finding, 'Arctic climate is now warming rapidly and much larger changes are projected'. An embedded link then opened a custom web page, with brief explanatory text, along with an ACIA graphic illustrating the rise in Arctic temperature, global CO2 concentrations, and carbon emissions for the last millennium. The demo continued with an interactive tour of other key findings (Reduced Sea Ice, Changes for Animals, Melting Glaciers, Coastal Erosion, Changes in Vegetation, Melting Permafrost, and others). Each placemark was located somewhat arbitrarily (which may be a concern for some audiences), but the points represented the messages in a geographic sense and enabled a smooth visual tour of the northern latitudes. Each placemark was linked to custom web pages with photos and concise take-home messages. The demo ended with navigation to Colorado, then Boulder, then the middle school that the students attended, all the while speaking to implications as they live their lives locally. The demo piqued the students' curiosity, and in this way better conveyed important messages about the Arctic and climate change. The use of geospatial visualizations for outreach and education appears to be in its infancy, with much potential.
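
    Placemarks of the kind used in this demo, a point linked to an explanatory web page, are straightforward to generate programmatically. The sketch below writes a minimal KML placemark from Python; the coordinates and URL are placeholders rather than the ones used in the presentation.

      def make_placemark(name, lon, lat, url):
          """Return a minimal KML Placemark whose description balloon embeds a web link."""
          return f"""<?xml version="1.0" encoding="UTF-8"?>
      <kml xmlns="http://www.opengis.net/kml/2.2">
        <Placemark>
          <name>{name}</name>
          <description><![CDATA[<a href="{url}">Key finding details</a>]]></description>
          <Point><coordinates>{lon},{lat},0</coordinates></Point>
        </Placemark>
      </kml>"""

      # Hypothetical placemark somewhere in the North American Arctic
      kml = make_placemark("Arctic warming", -105.0, 70.0, "https://example.org/finding1")
      with open("finding1.kml", "w") as f:
          f.write(kml)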

  7. Fast and Forceful: Modulation of Response Activation Induced by Shifts of Perceived Depth in Virtual 3D Space

    PubMed Central

    Plewan, Thorsten; Rinkenauer, Gerhard

    2016-01-01

    Reaction time (RT) can be strongly influenced by a number of stimulus properties. For instance, there is converging evidence that perceived size rather than physical (i.e., retinal) size constitutes a major determinant of RT. However, this view has recently been challenged, since within a virtual three-dimensional (3D) environment retinal size modulation failed to influence RT. In order to further investigate this issue, in the present experiments response force (RF) was recorded as a supplemental measure of response activation in simple reaction tasks. In two separate experiments, participants' task was to react as fast as possible to the occurrence of a target located close to the observer or farther away, while the offset between target locations was increased from Experiment 1 to Experiment 2. At the same time, perceived target size (by varying the retinal size across depth planes) and target type (sphere vs. soccer ball) were modulated. Both experiments revealed faster and more forceful reactions when targets were presented closer to the observers. Perceived size and target type barely affected RT and RF in Experiment 1 but differentially affected both variables in Experiment 2. Thus, the present findings emphasize the usefulness of RF as a supplement to conventional RT measurement. On a behavioral level, the results confirm that (at least within virtual 3D space) perceived object size strongly influences neither RT nor RF. Rather, the relative position within egocentric (body-centered) space presumably indicates an object's behavioral relevance and consequently constitutes an important modulator of visual processing. PMID:28018273

  8. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task while the avatar copies the patient's gestures, which are captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient.

  9. Collaborative Science Learning in Three-Dimensional Immersive Virtual Worlds: Pre-Service Teachers' Experiences in Second Life

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin; McCandless, Kevin

    2014-01-01

    The purpose of this mixed methods study was to help pre-service teachers experience and evaluate the potential of Second Life, a three-dimensional immersive virtual environment, for potential integration into their future teaching. By completing collaborative assignments in Second Life, nineteen pre-service general education teachers explored an…

  10. Designing the Self: The Transformation of the Relational Self-Concept through Social Encounters in a Virtual Immersive Environment

    ERIC Educational Resources Information Center

    Knutzen, K. Brant; Kennedy, David M.

    2012-01-01

    This article describes the findings of a 3-month study on how social encounters mediated by an online Virtual Immersive Environment (VIE) impacted on the relational self-concept of adolescents. The study gathered data from two groups of students as they took an Introduction to Design and Programming class. Students in group 1 undertook course…

  11. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e., works of art produced using a computer, have been published for hobby and entertainment purposes. Activation of the brain, improvement of visual eyesight, reduction of mental stress, healing effects, etc., are said to be expected when a CGS is properly appreciated as a stereoscopic view. There is a great deal of information on internet web sites concerning all aspects of stereogram history, science, social organization, the various types of stereograms, and free software for generating CGS. Generally, CGS is classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each type has advantages and disadvantages when the stereogram is viewed directly with two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized, and mixed type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called the wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.

  12. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences in which learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections, the software generates, and allows for visualization of, images of virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education.
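
    The "virtual microtome" idea, resampling a reconstructed stack along an arbitrarily oriented plane, can be expressed in a few lines of generic code. The sketch below is an illustration with scipy rather than the software used in the study; the volume, origin, and in-plane axes are hypothetical.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def oblique_slice(volume, origin, u, v, size=256, spacing=1.0):
          """Sample a size x size planar slice through a 3D image stack.
          origin: (z, y, x) centre of the slice; u, v: orthogonal in-plane axis vectors."""
          u = np.asarray(u, dtype=float); u /= np.linalg.norm(u)
          v = np.asarray(v, dtype=float); v /= np.linalg.norm(v)
          steps = (np.arange(size) - size / 2) * spacing
          # 3D sample positions: origin + s*u + t*v for every (s, t) on the slice grid
          coords = (np.asarray(origin, dtype=float)[:, None, None]
                    + u[:, None, None] * steps[None, :, None]
                    + v[:, None, None] * steps[None, None, :])
          return map_coordinates(volume, coords, order=1, mode="nearest")

      # Toy stack of 64 serial sections, each 128 x 128 pixels
      stack = np.random.rand(64, 128, 128)
      section = oblique_slice(stack, origin=(32, 64, 64), u=(1, 1, 0), v=(0, 0, 1))
      print(section.shape)  # (256, 256)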

  13. Redirecting walking and driving for natural navigation in immersive virtual environments.

    PubMed

    Bruder, Gerd; Interrante, Victoria; Phillips, Lane; Steinicke, Frank

    2012-04-01

    Walking is the most natural form of locomotion for humans, and real walking interfaces have demonstrated their benefits for several navigation tasks. With recently proposed redirection techniques it becomes possible to overcome space limitations imposed by tracking sensors or laboratory setups, and, theoretically, it is now possible to walk through arbitrarily large virtual environments. However, walking as the sole locomotion technique has drawbacks, in particular for long distances, such that even in the real world we tend to supplement walking with passive or active transportation for longer-distance travel. In this article we show that concepts from the field of redirected walking can be applied to movements with transportation devices. We conducted psychophysical experiments to determine perceptual detection thresholds for redirected driving, and set these in relation to results from redirected walking. We show that redirected walking-and-driving approaches can easily be realized in immersive virtual reality laboratories, e.g., with electric wheelchairs, and that such systems can combine the advantages of real walking in confined spaces with the benefits of vehicle-based self-motion for longer-distance travel.
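
    The basic mechanism behind redirection, scaling the mapping between real and virtual motion by gains that ideally stay below the user's detection threshold, is simple to express. The sketch below is a generic per-frame update with made-up gain values, not the thresholds reported in the article.

      import numpy as np

      def redirect(real_delta_pos, real_delta_yaw_deg, trans_gain=1.2, rot_gain=1.15):
          """Map one tracked frame of real movement to virtual movement.
          Gains slightly above or below 1 steer the user through the physical space;
          the values used here are purely illustrative."""
          virtual_delta_pos = trans_gain * np.asarray(real_delta_pos, dtype=float)
          virtual_delta_yaw = rot_gain * real_delta_yaw_deg
          return virtual_delta_pos, virtual_delta_yaw

      # One tracker update: the user (or wheelchair) moved 5 cm forward and turned 2 degrees
      d_pos, d_yaw = redirect([0.05, 0.0], 2.0)
      print(d_pos, d_yaw)  # approximately [0.06, 0.0] and 2.3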

  14. Visualization and Interpretation in 3D Virtual Reality of Topographic and Geophysical Data from the Chicxulub Impact Crater

    NASA Astrophysics Data System (ADS)

    Rosen, J.; Kinsland, G. L.; Borst, C.

    2011-12-01

    We have assembled Shuttle Radar Topography Mission (SRTM) data (Borst and Kinsland, 2005), gravity data (Bedard, 1977), horizontal gravity gradient data (Hildebrand et al., 1995), magnetic data (Pilkington et al., 2000) and GPS topography data (Borst and Kinsland, 2005) from the Chicxulub Impact Crater buried on the Yucatan Peninsula of Mexico. These data sets are imaged as gridded surfaces and are all georegistered, within an interactive 3D virtual reality (3DVR) visualization and interpretation system created and maintained in the Center for Advanced Computer Studies at the University of Louisiana at Lafayette. We are able to view and interpret the data sets individually or together and to scale and move the data or to move our physical head position so as to achieve the best viewing perspective for interpretation. A feature which is especially valuable for understanding the relationships between the various data sets is our ability to "interlace" the 3D images. "Interlacing" is a technique we have developed whereby the data surfaces are moved along a common axis so that they interpenetrate. This technique leads to rapid and positive identification of spatially corresponding features in the various data sets. We present several images from the 3D system, which demonstrate spatial relationships amongst the features in the data sets. Some of the anomalies in gravity are very nearly coincident with anomalies in the magnetic data as one might suspect if the causal bodies are the same. Other gravity and magnetic anomalies are not spatially coincident indicating different causal bodies. Topographic anomalies display a strong spatial correspondence with many gravity anomalies. In some cases small gravity anomalies and topographic valleys are caused by shallow dissolution within the Tertiary cover along faults or fractures propagated upward from the buried structure. In other cases the sources of the gravity anomalies are in the more deeply buried structure from which

  15. iSocial: delivering the Social Competence Intervention for Adolescents (SCI-A) in a 3D virtual learning environment for youth with high functioning autism.

    PubMed

    Stichter, Janine P; Laffey, James; Galyen, Krista; Herzog, Melissa

    2014-02-01

    One consistent area of need for students with autism spectrum disorders is social competence. However, the increasing need to provide qualified teachers to deliver evidence-based practices in areas like social competence leaves schools, such as those in rural areas, in need of support. Distance education, and in particular 3D virtual learning, holds great promise for supporting schools and youth in gaining social competence through knowledge and social practice in context. iSocial, a distance-education 3D virtual learning environment, implemented the 31-lesson social competence intervention for adolescents across three small cohorts totaling 11 students over a period of 4 months. Results demonstrated that the social competence curriculum was delivered with fidelity in the 3D virtual learning environment. Moreover, learning outcomes suggest that the iSocial approach shows promise for social competence benefits for youth.

  16. 3D virtual planning in orthognathic surgery and CAD/CAM surgical splints generation in one patient with craniofacial microsomia: a case report

    PubMed Central

    Vale, Francisco; Scherzberg, Jessica; Cavaleiro, João; Sanz, David; Caramelo, Francisco; Maló, Luísa; Marcelino, João Pedro

    2016-01-01

    Objective: In this case report, the feasibility and precision of three-dimensional (3D) virtual planning in one patient with craniofacial microsomia are tested using Nemoceph 3D-OS software (Software Nemotec SL, Madrid, Spain) to predict postoperative outcomes on hard tissue and to produce CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) surgical splints. Methods: The clinical protocol consists of 3D data acquisition of the craniofacial complex by cone-beam computed tomography (CBCT) and surface scanning of the plaster dental casts. The "virtual patient" thus created underwent virtual surgery and a simulation of postoperative results on hard tissues. Surgical splints were manufactured using CAD/CAM technology in order to transfer the virtual surgical plan to the operating room. Intraoperatively, CAD/CAM and conventional surgical splints were comparable. A second set of 3D images was obtained after surgery to acquire linear measurements and compare them with the measurements obtained when predicting postoperative results virtually. Results: A high similarity was found between both types of surgical splints, with equal fitting on the dental arches. The linear measurements presented some discrepancies between the actual surgical outcomes and the results predicted by the 3D virtual simulation, but caution must be taken in the analysis of these results due to several variables. Conclusions: The reported case confirms the clinical feasibility of the described computer-assisted orthognathic surgical protocol. Further progress in the development of technologies for 3D image acquisition and improvements in software programs to simulate postoperative changes on soft tissue are required. PMID:27007767

  17. 3D Virtual Reality Applied in Tectonic Geomorphic Study of the Gombori Range of Greater Caucasus Mountains

    NASA Astrophysics Data System (ADS)

    Sukhishvili, Lasha; Javakhishvili, Zurab

    2016-04-01

    The Gombori Range forms the southern part of the young Greater Caucasus Mountains and stretches from NW to SE, separating the Alazani and Iori basins within the eastern Georgian province of Kakheti. The active phase of Caucasian orogeny started in the Pliocene, but according to the alluvial sediments of the Gombori Range (mapped in the Soviet geologic map), its uplift appears to be a Quaternary event. The highest peak of the Gombori Range has an absolute elevation of 1991 m, while the neighboring Alazani valley reaches only 400 m. We assume the range has a very fast uplift rate, which could have triggered reversals of stream flow direction during the Quaternary. To check these preliminary assumptions we will use tectonic and fluvial geomorphic and stratigraphic approaches, including paleocurrent analyses and various affordable absolute dating techniques, to detect and date evidence of river course reversals. For this purpose we selected the river Turdo outcrop. The river flows northwards from the Gombori Range, and near the region's main city of Telavi it exposes a 30-40 m high continuous outcrop along a 1 km section. The Turdo outcrop has very steep walls and requires special climbing skills to work on it. The goal of this particular study is to avoid the time- and resource-consuming ground survey of this steep, high, and wide outcrop and to test 3D aerial and ground-based photogrammetric modelling and analysis approaches in the initial stage of the tectonic geomorphic study. Using this type of remote sensing and virtual-lab analysis of the 3D outcrop model, we roughly delineated stratigraphic layers, selected exact locations for applying various research techniques, and planned safe and suitable climbing routes for reaching the investigation sites.

  18. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of three-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  19. Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David

    2009-01-01

    Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…

  20. Incorporating immersive virtual environments in health promotion campaigns: a construal level theory approach.

    PubMed

    Ahn, Sun Joo Grace

    2015-01-01

    In immersive virtual environments (IVEs), users may observe negative consequences of a risky health behavior in a personally involving way via digital simulations. In the context of an ongoing health promotion campaign, IVEs coupled with pamphlets are proposed as a novel messaging strategy to heighten personal relevance and involvement with the issue of soft-drink consumption and obesity, as well as perceptions that the risk is proximal and imminent. The framework of construal level theory guided the design of a 2 (tailoring: other vs. self) × 2 (medium: pamphlet only vs. pamphlet with IVEs) between-subjects experiment to test the efficacy in reducing the consumption of soft drinks over 1 week. Immediately following exposure, tailoring the message to the self (vs. other) seemed to be effective in reducing intentions to consume soft drinks. The effect of tailoring dissipated after 1 week, and measures of actual soft-drink consumption 1 week following experimental treatments demonstrated that coupling IVEs with the pamphlet was more effective. Behavioral intention was a significant predictor of actual behavior, but underlying mechanisms driving intentions and actual behavior were distinct. Results prescribed a messaging strategy that incorporates both tailoring and coupling IVEs with traditional media to increase behavioral changes over time.

  1. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows the generation, with a very simple protocol, of customized three-dimensional structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  2. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  3. Multi-parallel open technology to enable collaborative volume visualization: how to create global immersive virtual anatomy classrooms.

    PubMed

    Silverstein, Jonathan C; Walsh, Colin; Dech, Fred; Olson, Eric; E, Michael; Parsad, Nigel; Stevens, Rick

    2008-01-01

    Many prototype projects aspire to develop a sustainable model of immersive radiological volume visualization for virtual anatomic education. Some have focused on distributed or parallel architectures. However, very few, if any, others have combined multi-location, multi-directional, multi-stream sharing of video, audio, desktop applications, and parallel stereo volume rendering to converge on an open, globally scalable, and inexpensive collaborative architecture and implementation method for anatomic teaching using radiological volumes. We have focused our efforts on bringing this all together for several years. We outline here the technology we are making available to the open source community and a suggested system implementation for creating global immersive virtual anatomy classrooms. With the releases of Access Grid 3.1 and our parallel stereo volume rendering code, inexpensive, globally scalable technology is available to enable collaborative volume visualization upon an award-winning framework. Based upon these technologies, immersive virtual anatomy classrooms that share educational or clinical principles can be constructed with the described setup, moderate technological expertise, and global scalability.

  4. EEG-based cognitive load of processing events in 3D virtual worlds is lower than processing events in 2D displays.

    PubMed

    Dan, Alex; Reiner, Miriam

    2016-08-31

    Interacting with 2D displays, such as computer screens, smartphones, and TVs, is currently a part of our daily routine; however, our visual system is built for processing 3D worlds. We examined the cognitive load associated with a simple and a complex task of learning paper-folding (origami) by observing 2D or stereoscopic 3D displays. While connected to an electroencephalogram (EEG) system, participants watched a 2D video of an instructor demonstrating the paper-folding tasks, followed by a stereoscopic 3D projection of the same instructor (a digital avatar) illustrating identical tasks. We recorded the power of alpha and theta oscillations and calculated the cognitive load index (CLI) as the ratio of the average power of frontal theta (Fz) to parietal alpha (Pz). The results showed a significantly higher cognitive load index associated with processing the 2D projection as compared to the 3D projection; additionally, changes in the average theta Fz power were larger for the 2D conditions than for the 3D conditions, while average alpha Pz power values were similar for 2D and 3D conditions for the less complex task and higher in the 3D condition for the more complex task. The cognitive load index was lower for the easier task and higher for the more complex task in both 2D and 3D. In addition, participants with lower spatial abilities benefited more from the 3D than from the 2D display. These findings have implications for understanding cognitive processing associated with 2D and 3D worlds and for employing stereoscopic 3D technology over 2D displays in designing emerging virtual and augmented reality applications.
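
    As a rough illustration of the index described above, the sketch below estimates band power with Welch's method and forms the theta(Fz)/alpha(Pz) ratio. The channel data are simulated, and the band limits (4-7 Hz and 8-12 Hz) are conventional choices rather than necessarily those used in the study.

      import numpy as np
      from scipy.signal import welch

      def band_power(signal, fs, lo, hi):
          """Average Welch PSD of `signal` within the [lo, hi] Hz band."""
          freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
          band = (freqs >= lo) & (freqs <= hi)
          return psd[band].mean()

      fs = 256                                  # sampling rate in Hz
      t = np.arange(0, 60, 1 / fs)              # one minute of simulated EEG
      fz = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)   # frontal channel
      pz = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # parietal channel

      theta_fz = band_power(fz, fs, 4, 7)       # frontal theta power
      alpha_pz = band_power(pz, fs, 8, 12)      # parietal alpha power
      cli = theta_fz / alpha_pz                 # cognitive load index as a power ratio
      print(f"CLI = {cli:.2f}")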

  5. Translation of First North American 50 and 70 cc Total Artificial Heart Virtual and Clinical Implantations: Utility of 3D Computed Tomography to Test Fit Devices.

    PubMed

    Ferng, Alice S; Oliva, Isabel; Jokerst, Clinton; Avery, Ryan; Connell, Alana M; Tran, Phat L; Smith, Richard G; Khalpey, Zain

    2016-11-10

    Since the creation of SynCardia's 50 cc Total Artificial Heart (TAH), patients with irreversible biventricular failure now have two sizing options. Herein, a case series of three patients who underwent successful 50 and 70 cc TAH implantation with complete closure of the chest cavity, utilizing preoperative "virtual implantation" of different-sized devices for surgical planning, is presented. Computed tomography (CT) images were used for preoperative planning prior to TAH implantation. Three-dimensional (3D) reconstructions of preoperative chest CT images were generated, and both 50 and 70 cc TAHs were virtually implanted into the patients' thoracic cavities. During the simulation, the TAHs were projected over the native hearts in a position similar to that of the actual implantation, and the relationships between the devices and the atria, ventricles, chest wall, and diaphragm were assessed. The 3D reconstructed images and virtual modeling were used to determine for each patient whether the 50 or 70 cc TAH would have a higher likelihood of successful implantation without complications. Subsequently, all three patients received clinical implants of the properly sized TAH based on virtual modeling, and their chest cavities were fully closed. This virtual implantation increases our confidence that the selected TAH will fit within the thoracic cavity, allowing for improved surgical outcomes. Clinical implantation of the TAHs showed that our virtual modeling was an effective method for determining the correct fit and sizing of 50 and 70 cc TAHs.
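
    A highly simplified way to express the fit test, checking whether a candidate device volume protrudes beyond the free space available in the chest, is a boolean-mask overlap on the segmented CT voxel grid. The sketch below is purely illustrative and is not the clinical workflow described above; the masks and sizes are synthetic.

      import numpy as np

      def fits(cavity_mask, device_mask, clearance_voxels=0):
          """True if the device lies (almost) entirely inside the free thoracic volume.
          Both inputs are boolean 3D arrays defined on the same CT voxel grid."""
          protruding = device_mask & ~cavity_mask   # device voxels outside the cavity
          return int(protruding.sum()) <= clearance_voxels

      # Toy masks standing in for segmented CT volumes
      cavity = np.zeros((100, 100, 100), dtype=bool)
      cavity[20:80, 20:80, 20:80] = True            # free space after virtual cardiectomy
      device_70cc = np.zeros_like(cavity); device_70cc[15:75, 30:70, 30:70] = True
      device_50cc = np.zeros_like(cavity); device_50cc[25:75, 30:65, 30:65] = True

      print("70 cc fits:", fits(cavity, device_70cc))  # False: protrudes at one end
      print("50 cc fits:", fits(cavity, device_50cc))  # True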

  6. Feasibility of an Immersive Virtual Reality Intervention for Hospitalized Patients: An Observational Cohort Study

    PubMed Central

    2016-01-01

    Background Virtual reality (VR) offers immersive, realistic, three-dimensional experiences that "transport" users to novel environments. Because VR is effective for acute pain and anxiety, it may have benefits for hospitalized patients; however, there are few reports of its use in this setting. Objective The aim was to evaluate the acceptability and feasibility of VR in a diverse cohort of hospitalized patients. Methods We assessed the acceptability and feasibility of VR in a cohort of patients admitted to an inpatient hospitalist service over a 4-month period. We excluded patients with motion sickness, stroke, seizure, dementia, or nausea, and those in isolation. Eligible patients viewed VR experiences (eg, ocean exploration; Cirque du Soleil; tour of Iceland) with Samsung Gear VR goggles. We then conducted semistructured patient interviews and performed statistical testing to compare patients willing versus unwilling to use VR. Results We evaluated 510 patients; 423 were excluded and 57 refused to participate, leaving 30 participants. Patients willing versus unwilling to use VR were younger (mean 49.1, SD 17.4 years vs mean 60.2, SD 17.7 years; P=.01); there were no differences by sex, race, or ethnicity. Among users, most reported a positive experience and indicated that VR could improve pain and anxiety, although many felt the goggles were uncomfortable. Conclusions Most inpatient users of VR described the experience as pleasant and capable of reducing pain and anxiety. However, few hospitalized patients in this "real-world" series were both eligible and willing to use VR. Consistent with the "digital divide" for emerging technologies, younger patients were more willing to participate. Future research should evaluate the impact of VR on clinical and resource outcomes. ClinicalTrial Clinicaltrials.gov NCT02456987; https://clinicaltrials.gov/ct2/show/NCT02456987 (Archived by WebCite at http://www.webcitation.org/6iFIMRNh3) PMID:27349654

  7. Bystander Responses to a Violent Incident in an Immersive Virtual Environment

    PubMed Central

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J.; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. 40 male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and either looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience and analysis of post-experiment interview data suggest that in-group members were more prone to confrontational intervention, whereas out-group members were more prone to make statements trying to defuse the situation. PMID:23300991

  8. Spatial awareness in immersive virtual environments revealed in open-loop walking

    NASA Astrophysics Data System (ADS)

    Turano, Kathleen A.; Chaudhury, Sidhartha

    2005-03-01

    People are able to walk without vision to previously viewed targets in the real world. This ability to update one's position in space has been attributed to a path integration system that uses internally generated self-motion signals together with the perceived object-to-self distance of the target. In a previous study using an immersive virtual environment (VE), we found that many subjects were unable to walk without vision to a previously viewed target located 4 m away. Their walking paths were influenced by the room structure that varied trial to trial. In this study we investigated whether the phenomenon is specific to a VE by testing subjects in a real world and a VE. The real world was viewed with field restricting goggles and via cameras using the same head-mounted display as in the VE. The results showed that only in the VE were walking paths influenced by the room structure. Women were more affected than men, and the effect decreased over trials and after subjects performed the task in the real world. The results also showed that a brief (<0.5 s) exposure to the visual scene during self-motion was sufficient to reduce the influence of the room structure on walking paths. The results are consistent with the idea that without visual experience within the VE, the path integration system is unable to effectively update one's spatial position. As a result, people rely on other cues to define their position in space. Women, unlike men, choose to use visual cues about environmental structure to reorient.

  9. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information, or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  10. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD $5,000. This scanner uses visible light sensing to capture both structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  11. A Second Chance at Health: How a 3D Virtual World Can Improve Health Self-Efficacy for Weight Loss Management Among Adults.

    PubMed

    Behm-Morawitz, Elizabeth; Lewallen, Jennifer; Choi, Grace

    2016-02-01

    Health self-efficacy, or the beliefs in one's capabilities to perform health behaviors, is a significant factor in eliciting health behavior change, such as weight loss. Research has demonstrated that virtual embodiment has the potential to alter one's psychology and physicality, particularly in health contexts; however, little is known about the impacts embodiment in a virtual world has on health self-efficacy. The present research is a randomized controlled trial (N = 90) examining the effectiveness of virtual embodiment and play in a social virtual world (Second Life [SL]) for increasing health self-efficacy (exercise and nutrition efficacy) among overweight adults. Participants were randomly assigned to a 3D social virtual world (avatar virtual interaction experimental condition), 2D social networking site (no avatar virtual interaction control condition), or no intervention (no virtual interaction control condition). The findings of this study provide initial evidence for the use of SL to improve exercise efficacy and to support weight loss. Results also suggest that individuals who have higher self-presence with their avatar reap more benefits. Finally, quantitative findings are triangulated with qualitative data to increase confidence in the results and provide richer insight into the perceived effectiveness and limitations of SL for meeting weight loss goals. Themes resulting from the qualitative analysis indicate that participation in SL can improve motivation and efficacy to try new physical activities; however, individuals who have a dislike for video games may not be benefitted by avatar-based virtual interventions. Implications for research on the transformative potential of virtual embodiment and self-presence in general are discussed.

  12. Research-Grade 3D Virtual Astromaterials Samples: Novel Visualization of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Benefit Curation, Research, and Education

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K. R.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2017-01-01

    NASA's vast and growing collections of astromaterials are both scientifically and culturally significant, requiring unique preservation strategies that need to be recurrently updated to contemporary technological capabilities and increasing accessibility demands. New technologies have made it possible to advance documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. Our interdisciplinary team has developed a method to create 3D Virtual Astromaterials Samples (VAS) of the existing collections of Apollo Lunar Samples and Antarctic Meteorites. Research-grade 3D VAS will virtually put these samples in the hands of researchers and educators worldwide, increasing accessibility and visibility of these significant collections. With new sample return missions on the horizon, it is of primary importance to develop advanced curation standards for documentation and visualization methodologies.

  13. The influence of regulatory fit and interactivity on brand satisfaction and trust in E-health marketing inside 3D virtual worlds (Second Life).

    PubMed

    Jin, Seung-A Annie; Lee, Kwan Min

    2010-12-01

    Interactive three-dimensional (3D) virtual environments like Second Life have great potential as venues for effective e-health marketing and e-brand management. Drawing from regulatory focus and interactivity literatures, this study examined the effects of the regulatory fit that consumers experience in interactive e-health marketing on their brand satisfaction and brand trust. The results of a two-group comparison experiment conducted within Second Life revealed that consumers in the regulatory fit condition show greater brand satisfaction and brand trust than those in the regulatory misfit condition, thus confirming the persuasive influence of regulatory fit in e-brand management inside 3D virtual worlds. In addition, a structural equation modeling analysis demonstrated the mediating role of consumers' perceived interactivity in explaining the processional link between regulatory fit and brand evaluation. Theoretical contributions and managerial implications of these findings are discussed.

  14. Repeated Use of Immersive Virtual Reality Therapy to Control Pain during Wound Dressing Changes in Pediatric and Adult Burn Patients

    PubMed Central

    Faber, Albertus W.; Patterson, David R.; Bremer, Marco

    2012-01-01

    Objective The current study explored whether immersive virtual reality continues to reduce pain (via distraction) during more than one wound care session per patient. Patients: Thirty-six patients aged 8 to 57 years (mean age 27.7 years), with an average of 8.4% total body surface area burned (range 0.25 to 25.5% TBSA), received bandage changes and wound cleaning. Methods Each patient received one baseline wound cleaning/debridement session with no VR (control condition) followed by one or more (up to seven) subsequent wound care sessions during VR. After each wound care session (one session per day), worst pain intensity was measured using a Visual Analogue Thermometer (VAT), the dependent variable. Using a within-subjects design, worst pain intensity (VAT) during wound care with no VR (baseline, Day 0) was compared to pain during wound care while using immersive virtual reality (up to seven days of wound care during VR). Results Compared to pain during the no-VR baseline (Day 0), pain ratings during wound debridement were statistically lower when patients were in virtual reality on Days 1, 2, and 3, and although not significant beyond Day 3, the pattern of results from Days 4, 5, and 6 is consistent with the notion that VR continues to reduce pain when used repeatedly. Conclusions Results from the present study suggest that VR continues to be effective when used for three (or possibly more) treatments during severe burn wound debridement. PMID:23970314

  15. Three-dimensional display modes for CT colonography: conventional 3D virtual colonoscopy versus unfolded cube projection.

    PubMed

    Vos, Frans M; van Gelder, Rogier E; Serlie, Iwo W O; Florie, Jasper; Nio, C Yung; Glas, Afina S; Post, Frits H; Truyen, Roel; Gerritsen, Frans A; Stoker, Jaap

    2003-09-01

    The authors compared a conventional two-directional three-dimensional (3D) display for computed tomography (CT) colonography with an alternative method they developed on the basis of time efficiency and surface visibility. With the conventional technique, 3D ante- and retrograde cine loops were obtained (hereafter, conventional 3D). With the alternative method, six projections were obtained at 90 degrees viewing angles (unfolded cube display). Mean evaluation time per patient with the conventional 3D display was significantly longer than that with the unfolded cube display. With the conventional 3D method, 93.8% of the colon surface came into view; with the unfolded cube method, 99.5% of the colon surface came into view. Sensitivity and specificity were not significantly different between the two methods. Agreements between observers were kappa = 0.605 for conventional 3D display and kappa = 0.692 for unfolded cube display. Consequently, the latter method enhances the 3D endoluminal display with improved time efficiency and higher surface visibility.
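
    The geometric idea behind the unfolded cube display, six virtual cameras with 90-degree fields of view that together cover the full sphere around the endoluminal viewpoint, can be sketched by enumerating the six axis-aligned view directions. The code below is only a schematic of that idea, not the authors' rendering software.

      import numpy as np

      # Six axis-aligned view directions; with a 90-degree field of view each,
      # the six faces jointly cover the full sphere around the camera position.
      CUBE_FACES = {
          "front": (0, 0, 1), "back": (0, 0, -1),
          "right": (1, 0, 0), "left": (-1, 0, 0),
          "up":    (0, 1, 0), "down": (0, -1, 0),
      }

      def face_of(direction):
          """Return which cube face a view ray falls on (largest-magnitude axis)."""
          d = np.asarray(direction, dtype=float)
          axis = int(np.argmax(np.abs(d)))
          names = [("right", "left"), ("up", "down"), ("front", "back")]
          return names[axis][0] if d[axis] > 0 else names[axis][1]

      print(face_of((0.2, 0.1, 0.9)))   # 'front'
      print(face_of((-0.8, 0.3, 0.1)))  # 'left'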

  16. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  17. Atom pair 2D-fingerprints perceive 3D-molecular shape and pharmacophores for very fast virtual screening of ZINC and GDB-17.

    PubMed

    Awale, Mahendra; Reymond, Jean-Louis

    2014-07-28

    Three-dimensional (3D) molecular shape and pharmacophores are important determinants of the biological activity of organic molecules; however, a precise computation of 3D-shape is generally too slow for virtual screening of very large databases. A reinvestigation of the concept of atom pairs initially reported by Carhart et al. and extended by Schneider et al. showed that a simple atom pair fingerprint (APfp) counting atom pairs at increasing topological distances in 2D-structures without atom property assignment correlates with various representations of molecular shape extracted from the 3D-structures. A related 55-dimensional atom pair fingerprint extended with atom properties (Xfp) provided an efficient pharmacophore fingerprint with good performance for ligand-based virtual screening such as the recovery of active compounds from decoys in DUD, and overlap with the ROCS 3D-pharmacophore scoring function. The APfp and Xfp data were organized for web-based extremely fast nearest-neighbor searching in ZINC (13.5 M compounds) and GDB-17 (50 M random subset), freely accessible at www.gdb.unibe.ch.
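
    The property-free atom-pair counting described above, tallying pairs of atoms at each topological (bond-path) distance, reduces to shortest-path counting on the molecular graph. The sketch below does this with a plain breadth-first search on a hand-written adjacency list (the heavy-atom graph of 1-propanol); it is only a schematic of the APfp idea, not the published implementation.

      from collections import Counter, deque

      def topological_distances(adj, start):
          """Breadth-first shortest bond-path distances from `start` to every atom."""
          dist = {start: 0}
          queue = deque([start])
          while queue:
              a = queue.popleft()
              for b in adj[a]:
                  if b not in dist:
                      dist[b] = dist[a] + 1
                      queue.append(b)
          return dist

      def atom_pair_counts(adj):
          """Count atom pairs at each topological distance (no atom typing)."""
          counts = Counter()
          atoms = sorted(adj)
          for i, a in enumerate(atoms):
              dist = topological_distances(adj, a)
              for b in atoms[i + 1:]:
                  counts[dist[b]] += 1
          return dict(counts)

      # Heavy-atom graph of 1-propanol: C0-C1-C2-O3
      propanol = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
      print(atom_pair_counts(propanol))  # {1: 3, 2: 2, 3: 1}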

  18. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment itself and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can therefore be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
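
    HDR capture of the kind mentioned above typically merges bracketed low-dynamic-range exposures into a single radiance map. The sketch below shows the common weighted-average approach on synthetic data, assuming a linear camera response; it is a generic illustration, not the CAVEPIPE tools described in the abstract.

      import numpy as np

      def merge_hdr(exposures, times):
          """Merge bracketed LDR exposures (float images in [0, 1]) into a radiance map.
          Each pixel's per-shot radiance estimate (pixel / exposure_time) is averaged
          with a hat weight that downweights under- and over-exposed values."""
          acc = np.zeros_like(exposures[0], dtype=float)
          wsum = np.zeros_like(acc)
          for img, t in zip(exposures, times):
              w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting, peaks at mid-grey
              acc += w * (img / t)
              wsum += w
          return acc / np.maximum(wsum, 1e-6)

      # Synthetic bracketed exposures of the same hypothetical scene
      scene = np.random.rand(64, 64) * 4.0        # "true" radiance
      times = [0.05, 0.2, 0.8]                    # exposure times in seconds
      shots = [np.clip(scene * t, 0.0, 1.0) for t in times]
      print(merge_hdr(shots, times).shape)        # (64, 64)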

  19. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a low degree of realism using techniques of two-dimensional graphics. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in the patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consists of reaching peripheral or perspective targets depending on the virtual environment shown. Several parameters, such as maximum speed, reaction time, path length, and initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to assess his/her satisfaction level. For all patients, the movement trajectories were enhanced as they completed the therapy, suggesting that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were larger with the 3D task. Regarding the success rates
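
    Kinematic measures of the kind listed above can be computed directly from the sampled end-effector trajectory. The sketch below uses simulated data and an arbitrary speed threshold to derive path length, peak speed, and reaction time; it is a generic illustration, not the analysis code used in the study.

      import numpy as np

      def kinematics(xy, t, speed_threshold=0.05):
          """Path length, peak speed, and reaction time from a sampled 2D trajectory.
          xy: (n, 2) positions in metres; t: (n,) timestamps in seconds."""
          steps = np.diff(xy, axis=0)
          seg_len = np.linalg.norm(steps, axis=1)
          path_length = seg_len.sum()
          speed = seg_len / np.diff(t)
          peak_speed = speed.max()
          moving = np.nonzero(speed > speed_threshold)[0]  # first interval above threshold
          reaction_time = t[moving[0] + 1] - t[0] if moving.size else np.nan
          return path_length, peak_speed, reaction_time

      # Simulated horizontal reach: 0.5 s at rest, then a smooth 25 cm movement
      t = np.linspace(0, 2, 201)
      x = np.where(t < 0.5, 0.0, 0.25 * (1 - np.cos(np.pi * (t - 0.5) / 1.5)) / 2)
      xy = np.stack([x, np.zeros_like(x)], axis=1)
      print(kinematics(xy, t))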

  20. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients.

    PubMed

    Lledó, Luis D; Díez, Jorge A; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J; Sabater-Navarro, José M; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects of the task. In contrast, 2D virtual environments are used to represent the tasks with a low degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in the patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving a virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consist of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates

  1. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  2. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever-increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs), provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  3. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military applications, and so on. However, most technologies provide the 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the public focus plane, and the cameras' optical axes should be offset toward the center of the public focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system. Virtual cameras can be used to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter setting of the virtual cameras: the near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
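
    The offset (off-axis) perspective projection mentioned above can be illustrated with the standard generalized perspective projection for a tracked eye and a rectangular display surface such as a floor screen. The sketch below computes only the asymmetric frustum matrix; it is a generic Kooima-style formulation, not the exact parameterization used by the authors.

```python
import numpy as np

def offset_perspective(eye, pa, pb, pc, near, far):
    """Off-axis perspective frustum for a tracked eye and a rectangular
    display surface. pa, pb, pc are the screen's lower-left, lower-right
    and upper-left corners in world coordinates. Illustrative sketch."""
    eye, pa, pb, pc = (np.asarray(v, dtype=float) for v in (eye, pa, pb, pc))
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (towards eye)
    va, vb, vc = pa - eye, pb - eye, pc - eye        # corners relative to the eye
    d = -np.dot(va, vn)                              # eye-to-screen distance
    l = np.dot(vr, va) * near / d                    # asymmetric frustum extents
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                     [0, 2*near/(t-b), (t+b)/(t-b), 0],
                     [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                     [0, 0, -1, 0]])

# Eye 1.7 m above a 2 m x 2 m floor screen, standing off-centre.
P = offset_perspective(eye=[0.5, 0.3, 1.7],
                       pa=[-1, -1, 0], pb=[1, -1, 0], pc=[-1, 1, 0],
                       near=0.1, far=100.0)
print(np.round(P, 3))
```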

  4. Optometric Measurements Predict Performance but not Comfort on a Virtual Object Placement Task with a Stereoscopic 3D Display

    DTIC Science & Technology

    2014-09-16

    Keywords: virtual environment, depth perception. Distribution A: approved for public release; distribution unlimited (88ABW, cleared 9/9/2013). …precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the Simulator Sickness

  5. NewVision: a program for interactive navigation and analysis of multiple 3-D data sets using coordinated virtual cameras.

    PubMed

    Pixton, J L; Belmont, A S

    1996-01-01

    We describe "NewVision", a program designed for rapid interactive display, sectioning, and comparison of multiple large three-dimensional (3-D) reconstructions. User tools for navigating within large 3-D data sets and selecting local subvolumes for display, combined with view caching, fast integer interpolation, and background tasking, provide highly interactive viewing of arbitrarily sized data sets on Silicon Graphics systems ranging from simple workstations to supercomputers. Multiple windows, each showing different views of the same 3-D data set, are coordinated through mapping of local coordinate systems to a single global world coordinate system. Mapping to a world coordinate system allows quantitative measurements from any open window as well as creation of linked windows in which operations such as panning, zooming, and 3-D rotations of the viewing perspective in any one window are mirrored by corresponding transformations in the views shown in other linked windows. The specific example of tracing 3-D fiber trajectories is used to demonstrate the potential of the linked window concept. A global overview of NewVision's design and organization is provided, and future development directions are briefly discussed.

  6. A New Approach to Improve Cognition, Muscle Strength, and Postural Balance in Community-Dwelling Elderly with a 3-D Virtual Reality Kayak Program.

    PubMed

    Park, Junhyuck; Yim, JongEun

    2016-01-01

    Aging is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity. Recently, intervention methods using virtual reality have been introduced, providing an enjoyable therapy for the elderly. The aim of this study was to investigate whether a 3-D virtual reality kayak program could improve the cognitive function, muscle strength, and balance of community-dwelling elderly. Importantly, kayaking involves most of the upper-body musculature and requires balance control. Seventy-two participants were randomly allocated into the kayak program group (n = 36) and the control group (n = 36). The two groups were well matched with respect to general characteristics at baseline. The participants in both groups performed a conventional exercise program for 30 min, and then the 3-D virtual reality kayak program was performed in the kayak program group for 20 min, two times a week for 6 weeks. Cognitive function was measured using the Montreal Cognitive Assessment. Muscle strength was measured using the arm curl and handgrip strength tests. Standing and sitting balance was measured using the Good Balance system. The post-test was performed in the same manner as the pre-test; the overall outcomes such as cognitive function (p < 0.05), muscle strength (p < 0.05), and balance (standing and sitting balance, p < 0.05) were significantly improved in the kayak program group compared to the control group. We propose that the 3-D virtual reality kayak program is a promising intervention method for improving the cognitive function, muscle strength, and balance of the elderly.

  7. Searching for anthranilic acid-based thumb pocket 2 HCV NS5B polymerase inhibitors through a combination of molecular docking, 3D-QSAR and virtual screening.

    PubMed

    Vrontaki, Eleni; Melagraki, Georgia; Mavromoustakos, Thomas; Afantitis, Antreas

    2016-01-01

    A combination of the following computational methods: (i) molecular docking, (ii) 3D Quantitative Structure-Activity Relationship Comparative Molecular Field Analysis (3D-QSAR CoMFA), (iii) similarity search and (iv) virtual screening using the PubChem database was applied to identify new anthranilic acid-based inhibitors of hepatitis C virus (HCV) replication. A number of known inhibitors were initially docked into the "Thumb Pocket 2" allosteric site of the crystal structure of the enzyme HCV RNA-dependent RNA polymerase (NS5B GT1b). Then, the CoMFA fields were generated through a receptor-based alignment of docking poses to build a validated and stable 3D-QSAR CoMFA model. The proposed model can first be utilized to gain insight into the molecular features that promote bioactivity, and then, within a virtual screening procedure, it can be used to estimate the activity of novel potential bioactive compounds prior to their synthesis and biological testing.
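
    The similarity-search step of such a pipeline is commonly implemented as a fingerprint-based Tanimoto ranking; the sketch below shows one generic way to do this with RDKit. The query and library SMILES are placeholders, not compounds from the study.

```python
# Generic fingerprint-based similarity search (RDKit); the query and library
# SMILES below are placeholders, not compounds from the study.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def rank_by_similarity(query_smiles, library_smiles, radius=2, n_bits=2048):
    """Rank library molecules by Tanimoto similarity of Morgan fingerprints
    to a query, as a minimal stand-in for the similarity-search step."""
    query_fp = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(query_smiles), radius, nBits=n_bits)
    scored = []
    for smi in library_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:                      # skip unparsable entries
            continue
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        scored.append((DataStructs.TanimotoSimilarity(query_fp, fp), smi))
    return sorted(scored, reverse=True)

hits = rank_by_similarity("Nc1ccccc1C(=O)O",          # anthranilic acid query
                          ["c1ccccc1", "Nc1ccccc1", "OC(=O)c1ccccc1N(C)C"])
print(hits[:3])
```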

  8. Enabling immersive simulation.

    SciTech Connect

    McCoy, Josh; Mateas, Michael; Hart, Derek H.; Whetzel, Jonathan; Basilico, Justin Derrick; Glickman, Matthew R.; Abbott, Robert G.

    2009-02-01

    The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.

  9. Changelings and Shape Shifters? Identity Play and Pedagogical Positioning of Staff in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi

    2010-01-01

    This paper presents a study that used narrative inquiry to explore staff experiences of learning and teaching in immersive worlds. The findings introduced issues relating to identity play, the relationship between pedagogy and play and the ways in which learning, play and fun were managed (or not). At the same time there was a sense of imposed or…

  10. Investigating the Affective Learning in a 3D Virtual Learning Environment: The Case Study of the Chatterdale Mystery

    ERIC Educational Resources Information Center

    Molka-Danielsen, Judith; Hadjistassou, Stella; Messl-Egghart, Gerhilde

    2016-01-01

    This research is motivated by the emergence of virtual technologies and their potential as engaging pedagogical tools for facilitating comprehension, interactions and collaborations for learning; and in particular as applied to learning second languages (L2). This paper provides a descriptive analysis of a case study that examines affective…

  11. Caring in the Dynamics of Design and Languaging: Exploring Second Language Learning in 3D Virtual Spaces

    ERIC Educational Resources Information Center

    Zheng, Dongping

    2012-01-01

    This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…

  12. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  13. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  14. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    PubMed

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well-acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain, we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include, for example, the mushroom bodies, central complex, and antennal and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope, and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  15. 3D-QSAR and virtual screening studies of thiazolidine-2,4-dione analogs: Validation of experimental inhibitory potencies towards PIM-1 kinase

    NASA Astrophysics Data System (ADS)

    Asati, Vivek; Bharti, Sanjay Kumar; Budhwani, Ashok Kumar

    2017-04-01

    The proviral insertion site in Moloney murine leukemia virus (PIM) is a family of serine/threonine kinases of the Ca2+/calmodulin-dependent protein kinase (CAMK) group, which is responsible for the activation and regulation of cellular transcription and translation. The three isoforms of PIM kinase (PIM-1, PIM-2 and PIM-3), which share high homology and functional redundancy, are widely expressed and involved in a variety of biological processes including cell survival, proliferation, differentiation and apoptosis. Altered expression of PIM-1 kinase is correlated with hematologic malignancies and solid tumors. In the present study, atom-based 3D-QSAR, docking and virtual screening studies have been performed on a series of thiazolidine-2,4-dione derivatives as PIM-1 kinase inhibitors. The 3D-QSAR and docking approach shortlisted the most active thiazolidine-2,4-dione derivatives, such as 28, 31, 33 and 35, with the incorporation of more than one structural feature in a single molecule. External validation by various parameters and molecular docking studies at the active site of PIM-1 kinase proved the reliability of the developed 3D-QSAR model. The pharmacophore (AADHR.33) generated from the 3D-QSAR study was used for screening drug-like compounds from the ZINC database, where ZINC15056464 and ZINC83292944 showed potential binding affinities at the active-site amino acid residues (LYS67, GLU171, ASP128 and ASP186) of PIM-1 kinase (PDB ID: 4DTK).
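
    One of the external-validation parameters typically reported for such models is the predictive r² on an external test set; the sketch below shows how it is computed from predicted and observed activities. The activity values are hypothetical, not data from the paper.

```python
import numpy as np

def r2_pred(y_test, y_pred, y_train):
    """Predictive r^2 for external validation of a QSAR model:
    1 - PRESS / SS, where PRESS is the squared prediction error on the
    test set and SS is the deviation of test activities from the
    training-set mean. Values below are hypothetical pIC50s."""
    y_test, y_pred, y_train = map(np.asarray, (y_test, y_pred, y_train))
    press = np.sum((y_test - y_pred) ** 2)
    ss = np.sum((y_test - y_train.mean()) ** 2)
    return 1.0 - press / ss

y_train = [5.1, 5.8, 6.4, 7.0, 7.9]          # training-set activities
y_test = [6.2, 6.9, 7.5]                     # external test-set activities
y_pred = [6.0, 7.1, 7.2]                     # model predictions for the test set
print(round(r2_pred(y_test, y_pred, y_train), 3))
```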

  16. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    ERIC Educational Resources Information Center

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, and mimicking the opportunity for immersion that real-life travel to a foreign country offers. We have very little research to show whether this is…

  17. Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion

    PubMed Central

    Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V.; Slater, Mel

    2012-01-01

    Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part – the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, and there was no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2–C4 conditions although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms between ownership and drift. Overall, these findings extend and enrich previous results that multisensory and sensorimotor information can reconstruct our perception of the body shape, size and symmetry even when this is not consistent with normal body proportions. PMID:22829891

  18. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources, especially for a reconstruction of no longer extant objects; as a tool for communication and cooperation within the production process; and as a means of communicating and visualizing results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research, based on empirical findings, about the importance of depiction during a 3D reconstruction process. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from the social sciences to gain a grounded view of how production processes take place in practice and which functions and roles images play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of the humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination, the authors of this paper applied a qualitative content analysis to a sample of 26 previously

  19. Building Analysis for Urban Energy Planning Using Key Indicators on Virtual 3d City Models - the Energy Atlas of Berlin

    NASA Astrophysics Data System (ADS)

    Krüger, A.; Kolbe, T. H.

    2012-07-01

    In the context of increasing greenhouse gas emissions and global demographic change, with a simultaneous trend towards urbanization, it is a big challenge for cities around the world to modify the energy supply chain and building characteristics so as to reduce energy consumption and mitigate carbon dioxide emissions. Sound knowledge of energy resource demand and supply, including its spatial distribution within urban areas, is of great importance for planning strategies addressing greater energy efficiency. The understanding of the city as a complex energy system affects several areas of urban living, e.g. energy supply, urban texture, human lifestyle, and climate protection. With the growing availability of 3D city models around the world based on the standard language and format CityGML, energy system modelling, analysis and simulation can be incorporated into these models. Both domains will profit from that interaction by bringing together official and accurate building models, including building geometries, semantics and locations that form a realistic image of the urban structure, with systemic energy simulation models. A holistic view of the impacts of energy planning scenarios can be modelled and analyzed, including side effects on urban texture and human lifestyle. This paper focuses on the identification, classification, and integration of energy-related key indicators of buildings and neighbourhoods within 3D building models. Consequent application of 3D city models conforming to CityGML serves the purpose of deriving indicators for this topic. These will be set into the context of urban energy planning within the Energy Atlas Berlin. The generation of indicator objects covering the indicator values and related processing information will be presented for the sample scenario of estimating heating energy consumption in buildings and neighbourhoods. In their entirety the key indicators will form an adequate image of the local energy situation for
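
    A building-level key indicator such as heating energy demand can, in the simplest case, be estimated from the heated floor area and a type-specific demand and then aggregated to the neighbourhood. The sketch below illustrates that idea with simplified, made-up attribute names and values; it is not the Energy Atlas Berlin implementation.

```python
from dataclasses import dataclass

@dataclass
class Building:
    """Simplified stand-in for a CityGML building feature; the attribute
    names and the specific demand values are illustrative only."""
    building_id: str
    footprint_area_m2: float
    storeys: int
    specific_demand_kwh_per_m2a: float   # depends e.g. on construction year/type

def heating_demand_kwh_per_year(b: Building) -> float:
    """Key indicator: annual heating energy demand, estimated from the
    heated floor area (footprint x storeys) and a type-specific demand."""
    heated_floor_area = b.footprint_area_m2 * b.storeys
    return heated_floor_area * b.specific_demand_kwh_per_m2a

def neighbourhood_demand(buildings) -> float:
    """Aggregate the building-level indicator to the neighbourhood level."""
    return sum(heating_demand_kwh_per_year(b) for b in buildings)

block = [Building("B1", 220.0, 5, 120.0),
         Building("B2", 140.0, 3, 95.0)]
print(f"{neighbourhood_demand(block):.0f} kWh/a")
```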

  20. An immersive virtual peer for studying social influences on child cyclists' road-crossing behavior.

    PubMed

    Babu, Sabarish V; Grechkin, Timofey Y; Chihak, Benjamin; Ziemer, Christine; Kearney, Joseph K; Cremer, James F; Plumert, Jodie M

    2011-01-01

    The goal of our work is to develop a programmatically controlled peer to bicycle with a human subject for the purpose of studying how social interactions influence road-crossing behavior. The peer is controlled through a combination of reactive controllers that determine the gross motion of the virtual bicycle, action-based controllers that animate the virtual bicyclist and generate verbal behaviors, and a keyboard interface that allows an experimenter to initiate the virtual bicyclist's actions during the course of an experiment. The virtual bicyclist's repertoire of behaviors includes road following, riding alongside the human rider, stopping at intersections, and crossing intersections through specified gaps in traffic. The virtual cyclist engages the human subject through gaze, gesture, and verbal interactions. We describe the structure of the behavior code and report the results of a study examining how 10- and 12-year-old children interact with a peer cyclist that makes either risky or safe choices in selecting gaps in traffic. Results of our study revealed that children who rode with a risky peer were more likely to cross intermediate-sized gaps than children who rode with a safe peer. In addition, children were significantly less likely to stop at the last six intersections after the experience of riding with the risky than the safe peer during the first six intersections. The results of the study and children's reactions to the virtual peer indicate that our virtual peer framework is a promising platform for future behavioral studies of peer influences on children's bicycle riding behavior.
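
    The gap-selection behavior of the risky versus safe virtual peer can be pictured as a simple reactive rule over the upcoming traffic gaps; the toy controller below illustrates the idea. The thresholds and the random "naturalness" factor are assumptions, not the study's actual behavior code.

```python
import random

def choose_gap(gaps_s, profile="safe", seed=None):
    """Toy gap-selection rule for a virtual peer cyclist. gaps_s is the list
    of upcoming inter-vehicle time gaps in seconds; the thresholds are
    illustrative, not the values used in the study."""
    rng = random.Random(seed)
    min_gap = 3.0 if profile == "risky" else 5.0   # risky peer accepts tighter gaps
    acceptable = [g for g in gaps_s if g >= min_gap]
    if not acceptable:
        return None                                # wait at the intersection
    # The peer occasionally skips the first acceptable gap to look natural.
    return acceptable[0] if rng.random() < 0.8 else rng.choice(acceptable)

print(choose_gap([2.5, 3.5, 4.2, 6.0], profile="risky", seed=1))
print(choose_gap([2.5, 3.5, 4.2, 6.0], profile="safe", seed=1))
```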

  1. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the most simple way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. President and founder of 123 Certification Inc., Mr. Claude Choquet, Ing. MSc. IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD interact online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a
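
    A weld-quality estimate driven by torch orientation, distance, and travel speed can be sketched as a product of per-parameter penalty terms, as below. The target values and tolerances are illustrative assumptions, not the rules used in ARC+.

```python
def weld_quality_score(work_angle_deg, arc_length_mm, travel_speed_mm_s,
                       targets=(15.0, 3.0, 5.0), tolerances=(10.0, 1.5, 3.0)):
    """Toy real-time weld-quality score in [0, 1] from torch orientation,
    arc length (contact-to-work distance) and travel speed. The target and
    tolerance values are illustrative, not taken from the ARC+ simulator."""
    measured = (work_angle_deg, arc_length_mm, travel_speed_mm_s)
    score = 1.0
    for value, target, tol in zip(measured, targets, tolerances):
        deviation = abs(value - target) / tol          # 0 = on target, 1 = at tolerance
        score *= max(0.0, 1.0 - deviation)             # penalize each parameter
    return score

# A pass that is slightly fast but otherwise close to target.
print(round(weld_quality_score(work_angle_deg=17.0,
                               arc_length_mm=3.2,
                               travel_speed_mm_s=6.5), 3))
```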

  2. In silico exploration of c-KIT inhibitors by pharmaco-informatics methodology: pharmacophore modeling, 3D QSAR, docking studies, and virtual screening.

    PubMed

    Chaudhari, Prashant; Bari, Sanjay

    2016-02-01

    c-KIT is a component of the platelet-derived growth factor receptor family, classified as a type-III receptor tyrosine kinase. c-KIT has been reported to be involved in small cell lung cancer, other malignant human cancers, and inflammatory and autoimmune diseases associated with mast cells. Available c-KIT inhibitors suffer from problems of growing resistance or cardiac toxicity. A combined in silico pharmacophore- and structure-based virtual screening was performed to identify novel potential c-KIT inhibitors. In the present study, five molecules from the ZINC database were retrieved as new potential c-KIT inhibitors using Schrödinger's Maestro 9.0 molecular modeling suite. An atom-featured 3D QSAR model was built using previously reported c-KIT inhibitors containing the indolin-2-one scaffold. The developed 3D QSAR model ADHRR.24 was found to be significant (R2 = 0.9378, Q2 = 0.7832) and sufficiently robust with good predictive accuracy, as confirmed through external validation approaches, Y-randomization, and the GH approach [GH score 0.84 and enrichment factor (E) 4.964]. The present QSAR model was further validated for OECD principle 3, in that the applicability domain was calculated using a "standardization approach." Molecular docking of the QSAR dataset molecules and the final ZINC hits was performed on the c-KIT receptor (PDB ID: 3G0E). Docking interactions were in agreement with the developed 3D QSAR model. Model ADHRR.24 was explored for ligand-based virtual screening followed by in silico ADME prediction studies. Five molecules from the ZINC database were obtained as potential c-KIT inhibitors with high in silico predicted activity and strong key binding interactions with the c-KIT receptor.

  3. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high, barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as the viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the importance of the viewpoint in structural geology and therefore the potential of using orthorectified virtual outcrops.
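
    The bedding-fault intersection direction used as the viewpoint can be obtained as the cross product of the two plane normals; the sketch below derives the line's trend and plunge from dip-direction/dip pairs. The orientations in the example are made up, not measurements from the outcrop.

```python
import numpy as np

def plane_normal(dip_direction_deg, dip_deg):
    """Unit normal of a geological plane from dip direction and dip angle
    (degrees), in an East-North-Up frame."""
    dd, dip = np.radians([dip_direction_deg, dip_deg])
    return np.array([np.sin(dd) * np.sin(dip),
                     np.cos(dd) * np.sin(dip),
                     np.cos(dip)])

def intersection_trend_plunge(plane_a, plane_b):
    """Trend/plunge (degrees) of the intersection line of two planes, e.g.
    bedding and a fault, obtained as the cross product of their normals."""
    line = np.cross(plane_normal(*plane_a), plane_normal(*plane_b))
    line /= np.linalg.norm(line)
    if line[2] > 0:                      # report the downward-plunging end
        line = -line
    trend = np.degrees(np.arctan2(line[0], line[1])) % 360.0
    plunge = np.degrees(np.arcsin(-line[2]))
    return trend, plunge

# Illustrative bedding (110/35) and fault (250/60) orientations.
print(intersection_trend_plunge((110.0, 35.0), (250.0, 60.0)))
```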

  4. Sino-VirtualMoon: A 3D web platform using Chang’E-1 data for collaborative research

    NASA Astrophysics Data System (ADS)

    Chen, Min; Lin, Hui; Wen, Yongning; He, Li; Hu, Mingyuan

    2012-05-01

    The successful launch of the Chinese Chang'E-1 satellite created a valuable opportunity for lunar research and represented China's remarkable leap in deep space exploration. With the observed data acquired by the Chang'E-1 satellite, a web platform was developed that aims to provide an open research workspace for experts to conduct collaborative scientific research on the Moon. Besides supporting 3D visualization, the platform also provides collaborative tools for basic geospatial analysis of the Moon, and supports collaborative simulation of the dynamic formation of lunar impact craters caused by the collision of meteors (or small asteroids). Based on this platform, related multidisciplinary experts can conveniently contribute their domain knowledge for collaborative scientific research of the Moon.

  5. μCT of ex-vivo stained mouse hearts and embryos enables a precise match between 3D virtual histology, classical histology and immunochemistry

    PubMed Central

    Larsson, Emanuel; Martin, Sabine; Lazzarini, Marcio; Tromba, Giuliana; Missbach-Guentner, Jeannine; Pinkert-Leetsch, Diana; Katschinski, Dörthe M.; Alves, Frauke

    2017-01-01

    The small size of the adult and developing mouse heart poses a great challenge for imaging in preclinical research. The aim of the study was to establish a phosphotungstic acid (PTA) ex-vivo staining approach that efficiently enhances the x-ray attenuation of soft tissue to allow high-resolution 3D visualization of mouse hearts by synchrotron radiation based μCT (SRμCT) and classical μCT. We demonstrate that SRμCT of PTA-stained mouse hearts ex-vivo allows imaging of the cardiac atrium, ventricles, myocardium (especially its fibre structure) and vessel walls in great detail, and furthermore enables the depiction of growth and anatomical changes during distinct developmental stages of hearts in mouse embryos. Our x-ray based virtual histology approach is not limited to SRμCT, as it does not require monochromatic and/or coherent x-ray sources and, even more importantly, can be combined with conventional histological procedures. Furthermore, it permits volumetric measurements, as we show for the assessment of plaque volumes in the aortic valve region of mice from an ApoE-/- mouse model. Subsequent Masson-Goldner trichrome staining of paraffin sections of PTA-stained samples revealed intact collagen and muscle fibres, and positive staining of CD31 on endothelial cells by immunohistochemistry illustrates that our approach does not prevent immunohistochemical analysis. The feasibility of scanning hearts already embedded in paraffin ensured a 100% correlation between virtual cut sections of the CT data sets and histological heart sections of the same sample, and may in future allow guiding the cutting process to specific regions of interest. In summary, since our CT-based virtual histology approach is a powerful tool for the 3D depiction of morphological alterations in hearts and embryos at high resolution and can be combined with classical histological analysis, it may be used in preclinical research to unravel structural alterations of various heart diseases. PMID:28178293

  6. μCT of ex-vivo stained mouse hearts and embryos enables a precise match between 3D virtual histology, classical histology and immunochemistry.

    PubMed

    Dullin, Christian; Ufartes, Roser; Larsson, Emanuel; Martin, Sabine; Lazzarini, Marcio; Tromba, Giuliana; Missbach-Guentner, Jeannine; Pinkert-Leetsch, Diana; Katschinski, Dörthe M; Alves, Frauke

    2017-01-01

    The small size of the adult and developing mouse heart poses a great challenge for imaging in preclinical research. The aim of the study was to establish a phosphotungstic acid (PTA) ex-vivo staining approach that efficiently enhances the x-ray attenuation of soft tissue to allow high-resolution 3D visualization of mouse hearts by synchrotron radiation based μCT (SRμCT) and classical μCT. We demonstrate that SRμCT of PTA-stained mouse hearts ex-vivo allows imaging of the cardiac atrium, ventricles, myocardium (especially its fibre structure) and vessel walls in great detail, and furthermore enables the depiction of growth and anatomical changes during distinct developmental stages of hearts in mouse embryos. Our x-ray based virtual histology approach is not limited to SRμCT, as it does not require monochromatic and/or coherent x-ray sources and, even more importantly, can be combined with conventional histological procedures. Furthermore, it permits volumetric measurements, as we show for the assessment of plaque volumes in the aortic valve region of mice from an ApoE-/- mouse model. Subsequent Masson-Goldner trichrome staining of paraffin sections of PTA-stained samples revealed intact collagen and muscle fibres, and positive staining of CD31 on endothelial cells by immunohistochemistry illustrates that our approach does not prevent immunohistochemical analysis. The feasibility of scanning hearts already embedded in paraffin ensured a 100% correlation between virtual cut sections of the CT data sets and histological heart sections of the same sample, and may in future allow guiding the cutting process to specific regions of interest. In summary, since our CT-based virtual histology approach is a powerful tool for the 3D depiction of morphological alterations in hearts and embryos at high resolution and can be combined with classical histological analysis, it may be used in preclinical research to unravel structural alterations of various heart diseases.

  7. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase in a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.
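
    Splitting the merged point cloud into 2 x 2 m sub-areas for mesh editing amounts to binning points by their XY tile index; a minimal sketch of that subdivision is given below, using a synthetic cloud rather than the actual scan data.

```python
import numpy as np
from collections import defaultdict

def tile_point_cloud(points, tile_size=2.0):
    """Split a large point cloud (N x 3 array, metres) into square sub-areas
    of tile_size x tile_size in the XY plane, mirroring the 2 m x 2 m
    subdivision used to keep mesh editing tractable. Illustrative sketch."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points[:, :2] / tile_size).astype(int)   # tile index per point
    tiles = defaultdict(list)
    for key, p in zip(map(tuple, keys), points):
        tiles[key].append(p)
    return {k: np.array(v) for k, v in tiles.items()}

cloud = np.random.rand(10000, 3) * [16.0, 17.0, 0.3]   # toy 16 m x 17 m model
tiles = tile_point_cloud(cloud)
print(len(tiles), "tiles; largest has",
      max(len(v) for v in tiles.values()), "points")
```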

  8. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2004-12-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase in a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  9. Intervertebral Disc Swelling Demonstrated by 3D and Water Content Magnetic Resonance Analyses after a 3-Day Dry Immersion Simulating Microgravity

    PubMed Central

    Treffel, Loïc; Mkhitaryan, Karen; Gellee, Stéphane; Gauquelin-Koch, Guillemette; Gharib, Claude; Blanc, Stéphane; Millet, Catherine

    2016-01-01

    Background: Vertebral deconditioning is commonly experienced after space flight and simulation studies. Disc herniation is quadrupled after space flight. Purpose: The main hypothesis formulated by the authors is that microgravity results in intervertebral disc (IVD) swelling. Study Design: The aim of the study was to identify the morphological changes of the spine and their clinical consequences after microgravity simulated by a 3-day dry immersion (DI). The experimental protocol was performed on 12 male volunteers using magnetic resonance imaging and spectroscopy before and after DI. Methods: The experiment was financially supported by CNES (Centre National d'Études Spatiales, the French Space Agency). Results: We observed an increase in spine height of 1.5 ± 0.4 cm and a decrease in curvature, particularly for the lumbar region, with a decrease of −4 ± 2.5°. We found a significant increase in IVD volume of +8 ± 9% at T12-L1 and +11 ± 9% at L5-S1. This phenomenon is likely associated with the 17 ± 27% increase in intervertebral disc water content (IWC). During the 3 days in DI, 92% of the subjects developed back pain in the lumbar region below the diaphragmatic muscle. This clinical observation may be linked to the morphological changes of the spine. Conclusions: The morphological changes observed and, specifically, the disc swelling caused by increased IWC may contribute to understanding disc herniation after microgravity exposure. Our results confirmed the efficiency of the 3-day DI model in quickly reproducing the effects of microgravity on spine morphology. Our findings raise the question of subject selection in space studies, especially studies about spine morphology and reconditioning programs after space flight. These results may contribute to a better understanding of the mechanisms underlying disc herniation and may serve as the basis for developing countermeasures for astronauts and for preventing IVD herniation and back pain on Earth. PMID

  10. A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience

    ERIC Educational Resources Information Center

    Shih, Ya-Chun

    2015-01-01

    Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…

  11. Immersive Virtual Reality in the Psychology Classroom: What Purpose Could it Serve?

    ERIC Educational Resources Information Center

    Coxon, Matthew

    2013-01-01

    Virtual reality is by no means a new technology, yet it is increasingly being used, to different degrees, in education, training, rehabilitation, therapy, and home entertainment. Although the exact reasons for this shift are not the subject of this short opinion piece, it is possible to speculate that decreased costs, and increased performance, of…

  12. Developing and Assessing Immersive Content for Naval Training: Lessons Learned in the Virtual World

    DTIC Science & Technology

    2012-10-01

    they learned how to navigate in the virtual world, control the camera view, and teleport between points of interest. After this trail was completed...students teleported to the main trail, where they were led through the line of sight and Method 1 exhibits by the instructor. During the second

  13. Visual Perspectives within Educational Computer Games: Effects on Presence and Flow within Virtual Immersive Learning Environments

    ERIC Educational Resources Information Center

    Scoresby, Jon; Shelton, Brett E.

    2011-01-01

    The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…

  14. Realistic Simulation of Environments of Unlimited Size in Immersive Virtual Environments

    DTIC Science & Technology

    2013-01-02

    compelling visual information is augmented with some combination of positional auditory cues, proprioceptive , kinesthetic, and inertial feedback about...completely untethered and can roam freely over a wide area. Proprioceptive , vestibular, and efferent sensory information for virtual walking in a...users with realistic proprioceptive and vestibular cues. Similarly, the proposed development of redirected walking in large physical spaces

  15. The Development and Evaluation of a Virtual Radiotherapy Treatment Machine Using an Immersive Visualisation Environment

    ERIC Educational Resources Information Center

    Bridge, P.; Appleyard, R. M.; Ward, J. W.; Philips, R.; Beavis, A. W.

    2007-01-01

    Due to the lengthy learning process associated with complicated clinical techniques, undergraduate radiotherapy students can struggle to access sufficient time or patients to gain the level of expertise they require. By developing a hybrid virtual environment with real controls, it was hoped that group learning of these techniques could take place…

  16. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  17. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  18. New Desktop Virtual Reality Technology in Technical Education

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.

    2008-01-01

    Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…

  19. Game engines and immersive displays

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
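
    Rendering synchronization across a clustered display is essentially a per-frame barrier: every node reports that its frame is ready, and the master releases all of them to swap together. The sketch below shows that idea with plain Python sockets and threads; it is a stand-in for the concept, not Unity3D or CAVE library code.

```python
import socket
import threading

HOST, PORT, NUM_NODES = "127.0.0.1", 50007, 2
server_up = threading.Event()

def master():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(NUM_NODES)
    server_up.set()                              # let the render nodes connect
    conns = [srv.accept()[0] for _ in range(NUM_NODES)]
    for _ in range(3):                           # three synchronized frames
        for c in conns:                          # wait until every node is ready
            assert c.recv(16).startswith(b"ready")
        for c in conns:                          # release them simultaneously
            c.sendall(b"swap\n")
    for c in conns:
        c.close()
    srv.close()

def render_node(node_id):
    server_up.wait()
    sock = socket.create_connection((HOST, PORT))
    for frame in range(3):
        # ... render this node's view of the scene here ...
        sock.sendall(b"ready\n")                 # report frame completion
        sock.recv(16)                            # block until the swap token
        print(f"node {node_id} swapped frame {frame}")
    sock.close()

threads = [threading.Thread(target=master)] + \
          [threading.Thread(target=render_node, args=(i,)) for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```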

  20. Drug Design for CNS Diseases: Polypharmacological Profiling of Compounds Using Cheminformatic, 3D-QSAR and Virtual Screening Methodologies

    PubMed Central

    Nikolic, Katarina; Mavridis, Lazaros; Djikic, Teodora; Vucicevic, Jelica; Agbaba, Danica; Yelekci, Kemal; Mitchell, John B. O.

    2016-01-01

    HIGHLIGHTS: Many CNS targets are being explored for multi-target drug design. New databases and cheminformatic methods enable prediction of the primary pharmaceutical target and off-targets of compounds. QSAR, virtual screening and docking methods increase the potential of rational drug design. The diverse cerebral mechanisms implicated in Central Nervous System (CNS) diseases, together with the heterogeneous and overlapping nature of phenotypes, indicate that multitarget strategies may be appropriate for the improved treatment of complex brain diseases. Understanding how the neurotransmitter systems interact is also important in optimizing therapeutic strategies. Pharmacological intervention on one target will often influence another one, such as the well-established serotonin-dopamine interaction or the dopamine-glutamate interaction. It is now accepted that drug action can involve plural targets and that polypharmacological interaction with multiple targets, to address disease in more subtle and effective ways, is a key concept for the development of novel drug candidates against complex CNS diseases. A multi-target therapeutic strategy for Alzheimer's disease resulted in the development of very effective Multi-Target Designed Ligands (MTDL) that act on both the cholinergic and monoaminergic systems, and also retard the progression of neurodegeneration by inhibiting amyloid aggregation. Many compounds already in databases have been investigated as ligands for multiple targets in drug-discovery programs. A probabilistic method, the Parzen-Rosenblatt window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Several multi-target ligands were selected for further study as compounds with possible additional beneficial pharmacological activities. Based on all these findings, it is concluded that multipotent
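
    The Parzen-Rosenblatt window predictor mentioned above scores each candidate target by a kernel density estimate around that target's known ligands; the sketch below shows a minimal version over toy descriptor vectors. Descriptors, target names, and the bandwidth are assumptions, not the ChEMBL-derived model.

```python
import numpy as np

def parzen_predict(x, train_X, train_y, bandwidth=1.0):
    """Parzen-Rosenblatt window 'predictor': score each target class by the
    mean Gaussian kernel density of the query descriptor vector x around
    that class's training compounds, then return the best-scoring class.
    Descriptors and targets below are synthetic, for illustration only."""
    train_X = np.asarray(train_X, dtype=float)
    x = np.asarray(x, dtype=float)
    scores = {}
    for target in set(train_y):
        members = train_X[[y == target for y in train_y]]
        sq_dist = np.sum((members - x) ** 2, axis=1)
        scores[target] = np.mean(np.exp(-sq_dist / (2.0 * bandwidth ** 2)))
    return max(scores, key=scores.get), scores

# Two hypothetical targets described by 3-D descriptor vectors.
X = [[0.1, 0.2, 0.0], [0.0, 0.3, 0.1], [0.9, 0.8, 1.0], [1.0, 0.7, 0.9]]
y = ["5-HT2A", "5-HT2A", "DRD2", "DRD2"]
query = [0.15, 0.25, 0.05]
best, all_scores = parzen_predict(query, X, y, bandwidth=0.5)
print(best)
```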

  1. A semi-immersive virtual reality incremental swing balance task activates prefrontal cortex: a functional near-infrared spectroscopy study.

    PubMed

    Basso Moro, Sara; Bisconti, Silvia; Muthalib, Makii; Spezialetti, Matteo; Cutini, Simone; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2014-01-15

    Previous functional near-infrared spectroscopy (fNIRS) studies indicated that the prefrontal cortex (PFC) is involved in the maintenance of postural balance after external perturbations. So far, no studies have been conducted to investigate the PFC hemodynamic response to virtual reality (VR) tasks that could be adopted in the field of functional neurorehabilitation. The aim of this fNIRS study was to assess the PFC oxygenation response during an incremental and a control swing balance task (ISBT and CSBT, respectively) in a semi-immersive VR environment driven by a depth-sensing camera. It was hypothesized that: i) the PFC would be bilaterally activated in response to the increase in ISBT difficulty, as this cortical region is involved in the allocation of attentional resources to maintain postural control; and ii) PFC activation would be greater in the right than in the left hemisphere, considering its dominance for visual control of body balance. To verify these hypotheses, 16 healthy male subjects were requested to stand barefoot while watching a three-dimensional virtual representation of themselves projected onto a screen. They were asked to maintain their equilibrium on a virtual blue swing board susceptible to external destabilizing perturbations (i.e., randomizing the forward-backward direction of the impressed pulse force) during a 3-min ISBT (performed at four levels of difficulty) or during a 3-min CSBT (performed constantly at the lowest level of difficulty of the ISBT). The center of mass (COM) was calculated at each frame and projected onto the floor. When the subjects were unable to maintain the COM over the board, the board became red (error). After each error, the time required to bring the COM back onto the board was calculated (returning time). An eight-channel continuous-wave fNIRS system was employed for measuring oxygenation changes (oxygenated hemoglobin, O2Hb; deoxygenated hemoglobin, HHb) related to PFC activation (Brodmann Areas 10, 11
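
    The error and returning-time measures can be computed from the floor-projected COM trajectory as sketched below: an error starts when the COM leaves the virtual board and the returning time is the interval until it is back. The board size and the synthetic COM trace are assumptions, not the study's parameters.

```python
import numpy as np

def returning_times(times_s, com_xy, board_half_size=0.25):
    """Detect 'errors' (frames where the floor-projected centre of mass falls
    outside a square virtual board) and the time needed to bring the COM back
    onto the board after each error. The board size is an assumption made for
    the sketch, not a value reported in the study."""
    times_s = np.asarray(times_s, dtype=float)
    com_xy = np.asarray(com_xy, dtype=float)
    on_board = np.all(np.abs(com_xy) <= board_half_size, axis=1)
    returns, error_start = [], None
    for t, ok in zip(times_s, on_board):
        if not ok and error_start is None:
            error_start = t                       # COM just left the board
        elif ok and error_start is not None:
            returns.append(t - error_start)       # COM is back: store returning time
            error_start = None
    return returns

t = np.arange(0.0, 2.0, 0.1)
com = np.column_stack([0.3 * np.sin(2 * np.pi * t / 2.0), np.zeros_like(t)])
print([round(r, 2) for r in returning_times(t, com)])
```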

  2. Urban Archaeology: how to Communicate a Story of a Site, 3d Virtual Reconstruction but not Only

    NASA Astrophysics Data System (ADS)

    Capone, M.

    2011-09-01

    Over the past few years, experimental systems have been developed to introduce new ways of enjoying cultural heritage using digital media. Technology has had a leading role in this testing ground, increasing the need to develop new ways of communication in line with contemporary iconographic culture. Most applications are aimed at creating online databases that allow free access to information, which helps to spread culture and simplify the study of cultural heritage. To this type of application are added others aimed at defining new and different ways of enjoying cultural heritage. Very interesting applications are those regarding the reconstruction of archaeological landscapes. The target of these applications is to develop a new level of knowledge that increases the value of the archaeological find and the level of understanding. In fact, digital media can bridge the communication gap associated with archaeological finds: virtual simulation offers the possibility of putting a find in its context, and it defines a new way to enjoy cultural heritage. In most of these cases, the spectacular and recreational factor generally prevails. We believe that experimentation is needed in this area, particularly for the development of urban archaeology. In this case, another obstacle to enjoyment is added to the lack of communication typical of archaeological finds, because the find is "hidden" in an irreversible way: it is under water or beneath the city. So, our research is mainly oriented towards defining a methodological path for elaborating a communication strategy to increase interest in urban archaeology.

  3. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning.

  4. Immersive virtual reality in computational chemistry: Applications to the analysis of QM and MM data

    PubMed Central

    Salvadori, Andrea; Del Frate, Gianluca; Pagliai, Marco; Barone, Vincenzo

    2016-01-01

    Abstract The role of Virtual Reality (VR) tools in molecular sciences is analyzed in this contribution through the presentation of the Caffeine software to the quantum chemistry community. Caffeine, developed at Scuola Normale Superiore, is specifically tailored for molecular representation and data visualization with VR systems, such as VR theaters and helmets. The usefulness and advantages that can be gained by exploiting VR are reported here, considering a few examples specifically selected to illustrate different levels of theory and molecular representation. PMID:27867214

  5. Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal

    DTIC Science & Technology

    1997-06-01

    Engagement Response Server needs to obtain and present the data from different databases. We take a concrete example. Many architects, civil engineers use...of a classification system that can be used to unify the drawings from architects, civil engineers and mechanical engineers. This is only an example...such as NASA and the Smithsonian it used virtual reality technologies to recreate expeditions to Antarctica, Mayan ruins and the surface of Mars. Seismic

  6. A method for generating an illusion of backwards time travel using immersive virtual reality-an exploratory study.

    PubMed

    Friedman, Doron; Pizarro, Rodrigo; Or-Berkers, Keren; Neyret, Solène; Pan, Xueni; Slater, Mel

    2014-01-01

    We introduce a new method, based on immersive virtual reality (IVR), to give people the illusion of having traveled backwards through time to relive a sequence of events in which they can intervene and change history. The participant had played an important part in events with a tragic outcome-deaths of strangers-by having to choose between saving 5 people or 1. We consider whether the ability to go back through time, and intervene, to possibly avoid all deaths, has an impact on how the participant views such moral dilemmas, and also whether this experience leads to a re-evaluation of past unfortunate events in their own lives. We carried out an exploratory study where in the "Time Travel" condition 16 participants relived these events three times, seeing incarnations of their past selves carrying out the actions that they had previously carried out. In a "Repetition" condition another 16 participants replayed the same situation three times, without any notion of time travel. Our results suggest that those in the Time Travel condition did achieve an illusion of "time travel" provided that they also experienced an illusion of presence in the virtual environment, body ownership, and agency over the virtual body that substituted their own. Time travel produced an increase in guilt feelings about the events that had occurred, and an increase in support of utilitarian behavior as the solution to the moral dilemma. Time travel also produced an increase in implicit morality as judged by an implicit association test. The time travel illusion was associated with a reduction of regret associated with bad decisions in their own lives. The results show that when participants have a third action that they can take to solve the moral dilemma (that does not immediately involve choosing between the 1 and the 5) then they tend to take this option, even though it is useless in solving the dilemma, and actually results in the deaths of a greater number.

  7. Manifold compositions, music visualization, and scientific sonification in an immersive virtual-reality environment.

    SciTech Connect

    Kaper, H. G.

    1998-01-05

    An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.

  8. Modeling and Accuracy Assessment for 3D Virtual Reconstruction in Cultural Heritage Using Low-Cost Photogrammetry: Surveying of the "Santa María Azogue" Church's Front

    NASA Astrophysics Data System (ADS)

    Robleda Prieto, G.; Pérez Ramos, A.

    2015-02-01

    Sometimes it can be difficult to represent "on paper" an architectural idea, a solution, a detail or a newly created element, depending on the complexity of what is to be conveyed through its graphical representation, but it may be even harder to represent the existing reality (a building, a detail, ...), at least with an acceptable degree of definition and accuracy. As a solution to this hypothetical problem, this paper presents a methodology for collecting measurement data by combining different methods or techniques, in order to obtain the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, and to assess the accuracy of the results obtained, at a sufficient level of accuracy and without very expensive costs. In addition, we can obtain a 3D model that provides strong support, beyond the point clouds obtained with more expensive methods such as laser scanning, for producing orthoimages. This methodology was used in the case study of the 3D virtual reconstruction of the main façade of a medieval church, chosen because of the geometrical complexity of many of its elements, such as the main doorway with its archivolts and many details, as well as the rose window located above it, which is inaccessible due to its height.

  9. Weapon identification using antemortem computed tomography with virtual 3D and rapid prototype modeling--a report in a case of blunt force head injury.

    PubMed

    Woźniak, Krzysztof; Rzepecka-Woźniak, Ewa; Moskała, Artur; Pohl, Jerzy; Latacz, Katarzyna; Dybała, Bogdan

    2012-10-10

    A frequent request of a prosecutor referring to forensic autopsy is to determine the mechanism of an injury and to identify the weapons used to cause those injuries. This task could be problematic in many ways, including changes in the primary injury caused by medical intervention and the process of healing. To accomplish this task, the forensic pathologist has to gather all possible information during the post-mortem examination. The more data is collected, the easier it is to obtain an accurate answer to the prosecutor's question. The authors present a case of head injuries that the victim sustained under unknown circumstances. The patient underwent neurosurgical treatment which resulted in alteration of the bone fracture pattern. The only way to evaluate this injury was to analyze antemortem clinical data, especially CT scans, with virtual 3D reconstruction of the fractured skull. A physical model of a part of the broken skull was created with the use of 3D printing. These advanced techniques, applied for the first time in Poland for forensic purposes, allowed investigators to extract enough data to develop a hypothesis about the mechanism of injury and the weapon most likely used.

  10. ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

    PubMed Central

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
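
    The neurofeedback described above targets relative spectral power in the alpha and beta ranges. As a hedged illustration (not the study's pipeline), relative band power for one EEG channel can be estimated with Welch's method; the band limits (8-12 Hz alpha, 13-30 Hz beta), sampling rate, and synthetic signal below are assumptions for the example.

    ```python
    # Illustrative sketch: relative alpha and beta power from a single EEG channel.
    import numpy as np
    from scipy.signal import welch

    def relative_band_power(signal, fs, band, total_band=(1.0, 40.0)):
        freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
        total = psd[(freqs >= total_band[0]) & (freqs <= total_band[1])].sum()
        in_band = psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
        return in_band / total

    fs = 256                                    # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # synthetic, alpha-dominant

    print("relative alpha:", relative_band_power(eeg, fs, (8, 12)))
    print("relative beta :", relative_band_power(eeg, fs, (13, 30)))
    ```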

  11. Conformal Visualization for Partially-Immersive Platforms

    PubMed Central

    Petkov, Kaloian; Papadopoulos, Charilaos; Zhang, Min; Kaufman, Arie E.; Gu, Xianfeng

    2010-01-01

    Current immersive VR systems such as the CAVE provide an effective platform for the immersive exploration of large 3D data. A major limitation is that in most cases at least one display surface is missing due to space, access or cost constraints. This partially-immersive visualization results in a substantial loss of visual information that may be acceptable for some applications; however, it becomes a major obstacle for critical tasks, such as the analysis of medical data. We propose a conformal deformation rendering pipeline for the visualization of datasets on partially-immersive platforms. The angle-preserving conformal mapping approach is used to map the 360° 3D view volume to arbitrary display configurations. It has the desirable property of preserving shapes under distortion, which is important for identifying features, especially in medical data. The conformal mapping is used for rasterization, real-time ray tracing and volume rendering of the datasets. Since the technique is applied during the rendering, we can construct stereoscopic images from the data, which is usually not true for image-based distortion approaches. We demonstrate the stereo conformal mapping rendering pipeline in the partially-immersive 5-wall Immersive Cabin (IC) for virtual colonoscopy and architectural review. PMID:26279083
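
    The key property exploited above is angle preservation. As a toy illustration only (not the paper's rendering pipeline), stereographic projection is a classic conformal map that sends unit view directions on the sphere to points in a plane while preserving local angles:

    ```python
    # Toy conformal map: stereographic projection of view directions onto a plane.
    import numpy as np

    def stereographic(directions):
        """directions: (N, 3) vectors; project from the south pole (0, 0, -1)
        onto the plane z = 0. Directions at the pole itself map to infinity."""
        d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
        x, y, z = d[:, 0], d[:, 1], d[:, 2]
        s = 1.0 / (1.0 + z)                  # scale factor of the projection
        return np.stack([x * s, y * s], axis=1)

    # Map a few sample view directions (forward, up, 45 degrees up) to the plane.
    dirs = np.array([[0.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [0.0, np.sqrt(0.5), np.sqrt(0.5)]])
    print(stereographic(dirs))
    ```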

  12. Resting-State fMRI Activity Predicts Unsupervised Learning and Memory in an Immersive Virtual Reality Environment

    PubMed Central

    Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145
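
    The core analysis idea (correlating per-subject resting-state signal variability with recall performance) can be sketched as follows; this is not the authors' code, and the data, region, and variable names are synthetic placeholders.

    ```python
    # Hedged sketch: correlate resting-state signal variability in one region of
    # interest with behavioral recall performance across subjects.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_subjects, n_timepoints = 20, 180
    roi_timeseries = rng.normal(size=(n_subjects, n_timepoints))   # one ROI per subject (synthetic)
    performance = rng.normal(size=n_subjects)                      # recall scores (synthetic)

    variability = roi_timeseries.std(axis=1)                       # resting-state signal variability
    r, p = pearsonr(variability, performance)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")
    ```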

  13. Eliciting Affect via Immersive Virtual Reality: A Tool for Adolescent Risk Reduction

    PubMed Central

    Houck, Christopher D.; Barker, David H.; Garcia, Abbe Marrs; Spitalnick, Josh S.; Curtis, Virginia; Roye, Scott; Brown, Larry K.

    2014-01-01

    Objective A virtual reality environment (VRE) was designed to expose participants to substance use and sexual risk-taking cues to examine the utility of VR in eliciting adolescent physiological arousal. Methods 42 adolescents (55% male) with a mean age of 14.54 years (SD = 1.13) participated. Physiological arousal was examined through heart rate (HR), respiratory sinus arrhythmia (RSA), and self-reported somatic arousal. A within-subject design (neutral VRE, VR party, and neutral VRE) was utilized to examine changes in arousal. Results The VR party demonstrated an increase in physiological arousal relative to a neutral VRE. Examination of individual segments of the party (e.g., orientation, substance use, and sexual risk) demonstrated that HR was significantly elevated across all segments, whereas only the orientation and sexual risk segments demonstrated significant impact on RSA. Conclusions This study provides preliminary evidence that VREs can be used to generate physiological arousal in response to substance use and sexual risk cues. PMID:24365699
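
    For orientation only: heart rate is derived from RR intervals, and vagally mediated variability can be summarized with simple indices. The study quantified RSA, which involves respiratory-band analysis; the sketch below uses RMSSD purely as a simpler, related stand-in, with made-up RR values.

    ```python
    # Minimal sketch: mean heart rate and RMSSD from a series of RR intervals (ms).
    import numpy as np

    rr_ms = np.array([820, 810, 845, 860, 830, 805, 790, 815, 850, 840], dtype=float)

    mean_hr_bpm = 60000.0 / rr_ms.mean()                 # beats per minute
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))        # root mean square of successive differences

    print(f"mean HR = {mean_hr_bpm:.1f} bpm, RMSSD = {rmssd:.1f} ms")
    ```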

  14. A method for generating an illusion of backwards time travel using immersive virtual reality—an exploratory study

    PubMed Central

    Friedman, Doron; Pizarro, Rodrigo; Or-Berkers, Keren; Neyret, Solène; Pan, Xueni; Slater, Mel

    2014-01-01

    We introduce a new method, based on immersive virtual reality (IVR), to give people the illusion of having traveled backwards through time to relive a sequence of events in which they can intervene and change history. The participant had played an important part in events with a tragic outcome—deaths of strangers—by having to choose between saving 5 people or 1. We consider whether the ability to go back through time, and intervene, to possibly avoid all deaths, has an impact on how the participant views such moral dilemmas, and also whether this experience leads to a re-evaluation of past unfortunate events in their own lives. We carried out an exploratory study where in the “Time Travel” condition 16 participants relived these events three times, seeing incarnations of their past selves carrying out the actions that they had previously carried out. In a “Repetition” condition another 16 participants replayed the same situation three times, without any notion of time travel. Our results suggest that those in the Time Travel condition did achieve an illusion of “time travel” provided that they also experienced an illusion of presence in the virtual environment, body ownership, and agency over the virtual body that substituted their own. Time travel produced an increase in guilt feelings about the events that had occurred, and an increase in support of utilitarian behavior as the solution to the moral dilemma. Time travel also produced an increase in implicit morality as judged by an implicit association test. The time travel illusion was associated with a reduction of regret associated with bad decisions in their own lives. The results show that when participants have a third action that they can take to solve the moral dilemma (that does not immediately involve choosing between the 1 and the 5) then they tend to take this option, even though it is useless in solving the dilemma, and actually results in the deaths of a greater number. PMID:25228889

  15. Structure-based rational quest for potential novel inhibitors of human HMG-CoA reductase by combining CoMFA 3D QSAR modeling and virtual screening.

    PubMed

    Zhang, Qing Y; Wan, Jian; Xu, Xin; Yang, Guang F; Ren, Yan L; Liu, Jun J; Wang, Hui; Guo, Yu

    2007-01-01

    3-Hydroxy-3-methylglutaryl-coenzyme A reductase (HMGR) catalyzes the formation of mevalonate. In many classes of organisms, this is the committed step leading to the synthesis of essential compounds, such as cholesterol. However, a high level of cholesterol is an important risk factor for coronary heart disease, for which an effective clinical treatment is to block HMGR using inhibitors like statins. Recently the structures of the catalytic portion of human HMGR complexed with six different statins have been determined by a delicate crystallography study (Istvan and Deisenhofer Science 2001, 292, 1160-1164), which established a solid structural and mechanistic basis for the rational design, optimization, and development of even better HMGR inhibitors. In this study, a three-dimensional quantitative structure-activity relationship (3D QSAR) analysis with comparative molecular field analysis (CoMFA) was performed on a training set of up to 35 statins and statin-like compounds. Predictive models were established in two different ways: (1) Models-fit, obtained with SYBYL's conventional fit-atom molecular alignment rule, have cross-validated coefficients (q2) up to 0.652 and regression coefficients (r2) up to 0.977; (2) Models-dock, obtained with FlexE by docking compounds into the HMGR active site, have cross-validated coefficients (q2) up to 0.731 and regression coefficients (r2) up to 0.947. These models were further validated with an external test set of 12 statins and statin-like compounds. Integrated with the CoMFA 3D QSAR predictive models, molecular surface property (electrostatic and steric) mapping and structure-based (both ligand and receptor) virtual screening were employed to explore potential novel hits for HMGR inhibitors. A representative set of eight new compounds with non-statin-like structures but high pIC50 values was identified in the present study.
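
    For reference, the cross-validated q2 and fitted r2 quoted above follow the standard QSAR definitions (given here in their general form, not transcribed from the paper): q2 is computed from leave-one-out (or leave-group-out) predicted activities, r2 from the fitted ones.

    ```latex
    % Standard CoMFA/QSAR statistics (general definitions):
    q^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_{i,\mathrm{cv}}\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2},
    \qquad
    r^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}
    ```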

  16. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. An approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research & Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is being calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and will describe the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  17. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
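
    The principle behind headphone-based 3D audio of this kind is convolution of a source signal with direction-dependent filters. The sketch below is purely illustrative and is not the Convolvotron's implementation: the "HRIRs" are crude placeholders encoding only an interaural time and level difference, whereas real systems use measured filters updated as the listener or source moves.

    ```python
    # Illustrative binaural rendering: convolve a mono source with a left/right
    # head-related impulse response (HRIR) pair.
    import numpy as np

    fs = 44100
    t = np.arange(0, 0.5, 1 / fs)
    mono = np.sin(2 * np.pi * 440 * t)                        # test tone

    # Placeholder HRIRs: small interaural time and level difference only.
    itd_samples = int(0.0006 * fs)                            # ~0.6 ms delay for the far ear
    hrir_left = np.zeros(256); hrir_left[0] = 1.0             # near ear: direct, louder
    hrir_right = np.zeros(256); hrir_right[itd_samples] = 0.6 # far ear: delayed, quieter

    left = np.convolve(mono, hrir_left)[: mono.size]
    right = np.convolve(mono, hrir_right)[: mono.size]
    stereo = np.stack([left, right], axis=1)                  # (samples, 2) ready for playback
    print(stereo.shape)
    ```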

  18. 2.5D/3D Models for the enhancement of architectural-urban heritage. A Virtual Tour of the design of the Fascist headquarters in Littoria

    NASA Astrophysics Data System (ADS)

    Ippoliti, E.; Calvano, M.; Mores, L.

    2014-05-01

    Enhancement of cultural heritage is not simply a matter of preserving material objects but comes full circle only when the heritage can be enjoyed and used by the community. This is the rationale behind this presentation: an urban Virtual Tour to explore the 1937 design of the Fascist Headquarters in Littoria, now part of Latina, by the architect Oriolo Frezzotti. Although the application is deliberately "simple", it was part of a much broader framework of goals. One such goal was to create "friendly and perceptively meaningful" interfaces by integrating different "3D models", thereby enriching the experience. In fact, by exploiting the activation of natural mechanisms of visual perception and the ensuing emotional emphasis associated with vision, the illusionistic simulation of the scene facilitates access to the data even for "amateur" users. A second goal was to "contextualise the information" on which the concept of cultural heritage is based. In the application, communication of the heritage is linked to its physical and linguistic context; the latter is then used as a basis from which to set out to explore and understand the historical evidence. A third goal was to foster the widespread dissemination and sharing of this heritage of knowledge. On the one hand we worked to make the application usable from the Web; on the other, we established a reliable, rapid operational procedure yielding high-quality processed data and the ensuing contents. The procedure was also repeatable on a large scale.

  19. Spatial working memory in immersive virtual reality foraging: path organization, traveling distance and search efficiency in humans (Homo sapiens).

    PubMed

    De Lillo, Carlo; Kirby, Melissa; James, Frances C

    2014-05-01

    Search and serial recall tasks were used in the present study to characterize the factors affecting the ability of humans to keep track of a set of spatial locations while traveling in an immersive virtual reality foraging environment. The first experiment required the exhaustive exploration of a set of locations following a procedure previously used with other primate and non-primate species to assess their sensitivity to the geometric arrangement of foraging sites. The second experiment assessed the dependency of search performance on search organization by requiring the participants to recall specific trajectories throughout the foraging space. In the third experiment, the distance between the foraging sites was manipulated in order to contrast the effects of organization and traveling distance on recall accuracy. The results show that humans benefit from the use of organized search patterns when attempting to monitor their travel through either a clustered "patchy" space or a matrix of locations. Their ability to recall a series of locations is dependent on whether the order in which they are explored conformed or did not conform to specific organization principles. Moreover, the relationship between search efficiency and search organization is not confounded by effects of traveling distance. These results indicate that in humans, organizational factors may play a large role in their ability to forage efficiently. The extent to which such dependency may pertain to other primates and could be accounted for by visual organization processes is discussed on the basis of previous studies focused on perceptual grouping, search, and serial recall in non-human species.

  20. Learning immersion without getting wet

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2012-03-01

    This paper describes the teaching of an immersive environments class in the Spring of 2011. The class had students from undergraduate as well as graduate art-related majors. Their digital backgrounds and interests were also diverse. These variables were channeled into different approaches throughout the semester. Class components included fundamentals of stereoscopic computer graphics to explore spatial depth, 3D modeling and skeleton animation to in turn explore presence, exposure to formats like a stereo projection wall and dome environments to compare field of view across devices, and finally, interaction and tracking to explore issues of embodiment. All these components were supported by theoretical readings discussed in class. Guest artists presented their work in Virtual Reality, Dome Environments and other immersive formats. Museum professionals also introduced students to space science visualizations, which utilize immersive formats. Here I present the assignments and their outcomes, together with insights as to how the creation of immersive environments can be learned through constraints that expose students to situations of embodied cognition.

  1. The Responses of Medical General Practitioners to Unreasonable Patient Demand for Antibiotics - A Study of Medical Ethics Using Immersive Virtual Reality

    PubMed Central

    Pan, Xueni; Slater, Mel; Beacco, Alejandro; Navarro, Xavi; Bellido Rivas, Anna I.; Swapp, David; Hale, Joanna; Forbes, Paul Alexander George; Denvir, Catrina; de C. Hamilton, Antonia F.; Delacroix, Sylvie

    2016-01-01

    Background Dealing with insistent patient demand for antibiotics is an all too common part of a General Practitioner’s daily routine. This study explores the extent to which portable Immersive Virtual Reality technology can help us gain an accurate understanding of the factors that influence a doctor’s response to the ethical challenge underlying such tenacious requests for antibiotics (given the threat posed by growing anti-bacterial resistance worldwide). It also considers the potential of such technology to train doctors to face such dilemmas. Experiment Twelve experienced GPs and nine trainees were confronted with an increasingly angry demand by a woman to prescribe antibiotics to her mother in the face of inconclusive evidence that such antibiotic prescription is necessary. The daughter and mother were virtual characters displayed in immersive virtual reality. The specific purposes of the study were twofold: first, whether experienced GPs would be more resistant to patient demands than the trainees, and second, to investigate whether medical doctors would take the virtual situation seriously. Results Eight out of the 9 trainees prescribed the antibiotics, whereas 7 out of the 12 GPs did so. On the basis of a Bayesian analysis, these results yield reasonable statistical evidence in favor of the notion that experienced GPs are more likely to withstand the pressure to prescribe antibiotics than trainee doctors, thus answering our first question positively. As for the second question, a post experience questionnaire assessing the participants’ level of presence (together with participants’ feedback and body language) suggested that overall participants did tend towards the illusion of being in the consultation room depicted in the virtual reality and that the virtual consultation taking place was really happening. PMID:26889676
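
    One simple way to see how the reported counts (8 of 9 trainees vs. 7 of 12 experienced GPs prescribed) can support a Bayesian statement is a Beta-Binomial comparison with Monte Carlo sampling; the paper's actual Bayesian analysis may well differ, so this only illustrates the idea, with uniform priors assumed.

    ```python
    # Hedged sketch: Beta-Binomial comparison of the two prescription rates.
    import numpy as np

    rng = np.random.default_rng(42)
    n_draws = 200_000

    # Uniform Beta(1, 1) priors; posteriors are Beta(successes + 1, failures + 1).
    p_trainee = rng.beta(8 + 1, 1 + 1, n_draws)   # 8 of 9 trainees prescribed
    p_gp = rng.beta(7 + 1, 5 + 1, n_draws)        # 7 of 12 GPs prescribed

    prob = np.mean(p_trainee > p_gp)
    print(f"P(trainees more likely to prescribe than GPs) ~ {prob:.2f}")
    ```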

  2. A novel minimally invasive, dorsolateral, tubular partial odontoidectomy and autologous bone augmentation to treat dens pseudarthrosis: cadaveric, 3D virtual simulation study and technical report.

    PubMed

    Archavlis, Eleftherios; Serrano, Lucas; Schwandt, Eike; Nimer, Amr; Molina-Fuentes, Moisés Felipe; Rahim, Tamim; Ackermann, Maximilian; Gutenberg, Angelika; Kantelhardt, Sven Rainer; Giese, Alf

    2017-02-01

    OBJECTIVE The goal of this study was to demonstrate the clinical and technical nuances of a minimally invasive, dorsolateral, tubular approach for partial odontoidectomy, autologous bone augmentation, and temporary C1-2 fixation to treat dens pseudarthrosis. METHODS A cadaveric feasibility study, a 3D virtual reality reconstruction study, and the subsequent application of this approach in 2 clinical cases are reported. Eight procedures were completed in 4 human cadavers. A minimally invasive, dorsolateral, tubular approach for odontoidectomy was performed with the aid of a tubular retraction system, using a posterolateral incision and an oblique approach angle. Fluoroscopy and postprocedural CT, using 3D volumetric averaging software, were used to evaluate the degree of bone removal of C1-2 lateral masses and the C-2 pars interarticularis. Two clinical cases were treated using the approach: a 23-year-old patient with an odontoid fracture and pseudarthrosis, and a 35-year-old patient with a history of failed conservative treatment for odontoid fracture. RESULTS At 8 cadaveric levels, the mean volumetric bone removal of the C1-2 lateral masses on 1 side was 3% ± 1%, and the mean resection of the pars interarticularis on 1 side was 2% ± 1%. The median angulation of the trajectory was 50°, and the median distance from the midline of the incision entry point on the skin surface was 67 mm. The authors measured the diameter of the working channel in relation to head positioning and assessed a greater working corridor of 12 ± 4 mm in 20° inclination, 15° contralateral rotation, and 5° lateral flexion to the contralateral side. There were no violations of the dura. The reliability of C-2 pedicle screws and C-1 lateral mass screws was 94% (15 of 16 screws) with a single lateral breach. The patients treated experienced excellent clinical outcomes. CONCLUSIONS A minimally invasive, dorsolateral, tubular odontoidectomy and autologous bone augmentation combined with C1

  3. Learning as Immersive Experiences: Using the Four-Dimensional Framework for Designing and Evaluating Immersive Learning Experiences in a Virtual World

    ERIC Educational Resources Information Center

    de Freitas, Sara; Rebolledo-Mendez, Genaro; Liarokapis, Fotis; Magoulas, George; Poulovassilis, Alexandra

    2010-01-01

    Traditional approaches to learning have often focused upon knowledge transfer strategies that have centred on textually-based engagements with learners, and dialogic methods of interaction with tutors. The use of virtual worlds, with text-based, voice-based and a feeling of "presence" naturally is allowing for more complex social interactions and…

  4. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 5 Report: Generation IV Reactor Virtual Mockup Proof-of-Principle Study

    SciTech Connect

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    This Task 5 report is part of a 3-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. A virtual mockup of the PBMR reactor cavity was created, and applications of virtual mockup technology to improve Gen IV design review, construction planning, and maintenance planning are discussed.

  5. ADN-Viewer: a 3D approach for bioinformatic analyses of large DNA sequences.

    PubMed

    Hérisson, Joan; Ferey, Nicolas; Gros, Pierre-Emmanuel; Gherbi, Rachid

    2007-01-20

    Most biologists work on textual DNA sequences that are limited to the linear representation of DNA. In this paper, we address the potential offered by Virtual Reality for 3D modeling and immersive visualization of large genomic sequences. The representation of the 3D structure of naked DNA allows biologists to observe and analyze genomes in an interactive way at different levels. We developed a powerful software platform that provides a new point of view for sequence analysis: ADN-Viewer. Nevertheless, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge amounts of data in real time, we designed various scene management algorithms and immersive human-computer interaction for user-friendly data exploration. In addition, one bioinformatics study scenario is proposed.
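
    A quick back-of-envelope check of the figure quoted above (about 6 Gbytes of 3D data for a 40-million-base-pair chromosome) shows the implied per-base-pair budget; the vertex layout used for comparison is an assumption, purely for illustration.

    ```python
    # Back-of-envelope: bytes of 3D data per base pair implied by the abstract.
    base_pairs = 40_000_000
    total_bytes = 6 * 1024 ** 3                     # ~6 GB of 3D data

    bytes_per_bp = total_bytes / base_pairs
    vertex_bytes = 3 * 4 + 3 * 4                    # xyz position + normal, 32-bit floats (assumed)
    print(f"{bytes_per_bp:.0f} bytes per base pair "
          f"(~{bytes_per_bp / vertex_bytes:.0f} position+normal vertices each)")
    ```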

  6. Knowledge and Valorization of Historical Sites Through 3D Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share, via the web, the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. The method has been verified on the ruins of the archeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images) and passing through the 3D digital surveying models and several other historical information sources, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios make it possible to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.

  7. [A survey of virtual reality research: From technology to psychology].

    PubMed

    Sakurai, K

    1995-10-01

    Virtual reality technology enables us to immerse ourselves in 3D synthesized environments. In this paper, I review recent research on virtual reality, focusing on (a) the terminology used in this research area, (b) technological approaches to setting up the different components of virtual reality: autonomy, interaction, and presence, (c) objective measures and subjective ratings of a viewer's sense of presence in virtual environments, and (d) present applications of virtual reality in different fields and their relation to pictorial communication. This review concludes that intermodality conflict and the measurement of the sense of presence are the crucial perceptual and cognitive topics in virtual reality research.

  8. History Educators and the Challenge of Immersive Pasts: A Critical Review of Virtual Reality "Tools" and History Pedagogy

    ERIC Educational Resources Information Center

    Allison, John

    2008-01-01

    This paper will undertake a critical review of the impact of virtual reality tools on the teaching of history. Virtual reality is useful in several different ways. History educators, elementary and secondary school teachers and professors, can all profit from the digital environment. Challenges arise quickly however. Virtual reality technologies…

  9. SciEthics Interactive: Science and Ethics Learning in a Virtual Environment

    ERIC Educational Resources Information Center

    Nadolny, Larysa; Woolfrey, Joan; Pierlott, Matthew; Kahn, Seth

    2013-01-01

    Learning in immersive 3D environments allows students to collaborate, build, and interact with difficult course concepts. This case study examines the design and development of the TransGen Island within the SciEthics Interactive project, a National Science Foundation-funded, 3D virtual world emphasizing learning science content in the context of…

  10. Digital Planetariums and Immersive Visualizations for Astronomy Education

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.

    2015-11-01

    Modern “fulldome” video digital planetariums combine immersive projection, which facilitates the understanding of relationships involving wide spatial angles, with 3D virtual environments, which facilitate learning of spatial relationships by allowing models and scenes to be viewed from multiple frames of reference. We report on an efficacy study of the use of digital planetariums for learning the astronomical topic of the seasons. Comparison of curriculum tests taken immediately after instruction versus pre-instruction shows significant gains for students who viewed visualizations in the immersive dome, versus their counterparts who viewed non-immersive content and those in the control group that saw no visualizations. The greater gains in learning in the digital planetarium can be traced not only to its ability to show wide-angle phenomena and the benefits accorded by the simulation software, but also to the lower-quality visual experience for students viewing the non-immersive versions of the lectures.

  11. A novel semi-immersive virtual reality visuo-motor task activates ventrolateral prefrontal cortex: a functional near-infrared spectroscopy study

    NASA Astrophysics Data System (ADS)

    Basso Moro, Sara; Carrieri, Marika; Avola, Danilo; Brigadoi, Sabrina; Lancia, Stefania; Petracca, Andrea; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2016-06-01

    Objective. In the last few years, interest in applying virtual reality systems to neurorehabilitation has been increasing. Their compatibility with neuroimaging techniques, such as functional near-infrared spectroscopy (fNIRS), allows for the investigation of brain reorganization with multimodal stimulation and real-time control of the changes occurring in brain activity. The present study was aimed at testing a novel semi-immersive visuo-motor task (VMT), which has the features to be adopted in the field of neurorehabilitation of upper limb motor function. Approach. A virtual environment was simulated through a three-dimensional hand-sensing device (the LEAP Motion Controller), and the concomitant VMT-related prefrontal cortex (PFC) response was monitored non-invasively by fNIRS. For the VMT, performed at three different levels of difficulty, it was hypothesized that the PFC would be activated, with an expected greater level of activation in the ventrolateral PFC (VLPFC), given its involvement in motor action planning and in the allocation of the attentional resources to generate goals from current contexts. Twenty-one subjects were asked to move their right hand/forearm with the purpose of guiding a virtual sphere over a virtual path. A twenty-channel fNIRS system was employed for measuring changes in PFC oxygenated and deoxygenated hemoglobin (O2Hb and HHb, respectively). Main results. A VLPFC O2Hb increase and a concomitant HHb decrease were observed during the VMT performance, without any difference in relation to task difficulty. Significance. The present study has revealed a particular involvement of the VLPFC in the execution of the novel proposed semi-immersive VMT, adoptable in the neurorehabilitation field.
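
    The O2Hb/HHb quantities reported here are conventionally obtained from optical-density changes via the modified Beer-Lambert law. The sketch below shows only that standard conversion step, not the study's code; the extinction coefficients, pathlength factor, source-detector separation, and measured values are placeholders, and real analyses use published tabulated coefficients.

    ```python
    # Sketch of the modified Beer-Lambert law step used in fNIRS processing.
    import numpy as np

    # Rows: wavelengths (e.g. ~760 nm, ~850 nm); columns: [HHb, O2Hb].
    extinction = np.array([[1.5, 0.6],     # placeholder coefficients (1/(mM*cm))
                           [0.8, 1.2]])
    dpf = 6.0                              # differential pathlength factor (assumed)
    separation_cm = 3.0                    # source-detector distance (assumed)

    delta_od = np.array([0.012, 0.018])    # measured optical-density changes (example values)

    # Solve delta_OD = extinction @ [dHHb, dO2Hb] * (dpf * separation)
    d_hhb, d_o2hb = np.linalg.solve(extinction * dpf * separation_cm, delta_od)
    print(f"dHHb = {d_hhb*1e3:.3f} uM, dO2Hb = {d_o2hb*1e3:.3f} uM")
    ```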

  12. The Development of a Virtual 3D Model of the Renal Corpuscle from Serial Histological Sections for E-Learning Environments

    ERIC Educational Resources Information Center

    Roth, Jeremy A.; Wilson, Timothy D.; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated…

  13. Recent Advances in Immersive Visualization of Ocean Data: Virtual Reality Through the Web on Your Laptop Computer

    NASA Astrophysics Data System (ADS)

    Hermann, A. J.; Moore, C.; Soreide, N. N.

    2002-12-01

    Ocean circulation is irrefutably three dimensional, and powerful new measurement technologies and numerical models promise to expand our three-dimensional knowledge of the dynamics further each year. Yet, most ocean data and model output is still viewed using two-dimensional maps. Immersive visualization techniques allow the investigator to view their data as a three dimensional world of surfaces and vectors which evolves through time. The experience is not unlike holding a part of the ocean basin in one's hand, turning and examining it from different angles. While immersive, three dimensional visualization has been possible for at least a decade, the technology was until recently inaccessible (both physically and financially) for most researchers. It is not yet fully appreciated by practicing oceanographers how new, inexpensive computing hardware and software (e.g. graphics cards and controllers designed for the huge PC gaming market) can be employed for immersive, three dimensional, color visualization of their increasingly huge datasets and model output. In fact, the latest developments allow immersive visualization through web servers, giving scientists the ability to "fly through" three-dimensional data stored half a world away. Here we explore what additional insight is gained through immersive visualization, describe how scientists of very modest means can easily avail themselves of the latest technology, and demonstrate its implementation on a web server for Pacific Ocean model output.

  14. IQ-Station: A Low Cost Portable Immersive Environment

    SciTech Connect

    Eric Whiting; Patrick O'Leary; William Sherman; Eric Wernert

    2010-11-01

    The emergence of inexpensive 3D TVs, affordable input and rendering hardware, and open-source software has created a yeasty atmosphere for the development of low-cost immersive environments (IE). A low-cost IE system, or IQ-station, fashioned from commercial off-the-shelf technology (COTS), coupled with a targeted immersive application, can be a viable laboratory instrument for enhancing scientific workflow for exploration and analysis. The use of an IQ-station in a laboratory setting also has the potential of quickening the adoption of a more sophisticated immersive environment as a critical enabler in modern scientific and engineering workflows. Prior work in immersive environments generally required either a head-mounted display (HMD) system or a large projector-based implementation, both of which have limitations in terms of cost, usability, or space requirements. The solution presented here provides an alternative platform providing a reasonable immersive experience that addresses those limitations. Our work brings together the needed hardware and software to create a fully integrated immersive display and interface system that can be readily deployed in laboratories and common workspaces. By doing so, it is now feasible for immersive technologies to be included in researchers’ day-to-day workflows. The IQ-Station sets the stage for much wider adoption of immersive environments outside the small communities of virtual reality centers.

  15. Exploring Ecosystems from the Inside: How Immersive Multi-user Virtual Environments Can Support Development of Epistemologically Grounded Modeling Practices in Ecosystem Science Instruction

    NASA Astrophysics Data System (ADS)

    Kamarainen, Amy M.; Metcalf, Shari; Grotzer, Tina; Dede, Chris

    2015-04-01

    Recent reform efforts and the next generation science standards emphasize the importance of incorporating authentic scientific practices into science instruction. Modeling can be a particularly challenging practice to address because modeling occurs within a socially structured system of representation that is specific to a domain. Further, in the process of modeling, experts interact deeply with domain-specific content knowledge and integrate modeling with other scientific practices in service of a larger investigation. It can be difficult to create learning experiences enabling students to engage in modeling practices that both honor the position of the novice along a spectrum toward more expert understanding and align well with the practices and reasoning used by experts in the domain. In this paper, we outline the challenges in teaching modeling practices specific to the domain of ecosystem science, and we present a description of a curriculum built around an immersive virtual environment that offers unique affordances for supporting student engagement in modeling practices. Illustrative examples derived from pilot studies suggest that the tools and context provided within the immersive virtual environment helped support student engagement in modeling practices that are epistemologically grounded in the field of ecosystem science.

  16. 3-D reconstruction and virtual ductoscopy of high-grade ductal carcinoma in situ of the breast with casting type calcifications using refraction-based X-ray CT.

    PubMed

    Ichihara, Shu; Ando, Masami; Maksimenko, Anton; Yuasa, Tetsuya; Sugiyama, Hiroshi; Hashimoto, Eiko; Yamasaki, Katsuhito; Mori, Kensaku; Arai, Yoshinori; Endo, Tokiko

    2008-01-01

    Stereomicroscopic observations of thick sections, or three-dimensional (3-D) reconstructions from serial sections, have provided insights into histopathology. However, they generally require time-consuming and laborious procedures. Recently, we have developed a new algorithm for refraction-based X-ray computed tomography (CT). The aim of this study is to apply this emerging technology to visualize the 3-D structure of a high-grade ductal carcinoma in situ (DCIS) of the breast. The high-resolution two-dimensional images of the refraction-based CT were validated by comparing them with the sequential histological sections. Without adding any contrast medium, the new CT showed strong contrast and was able to depict non-calcified fine structures such as duct walls and the intraductal carcinoma itself, both of which were barely visible in a conventional absorption-based CT. 3-D reconstruction and virtual endoscopy revealed that the high-grade DCIS was located within the dichotomous branches of the ducts. Multiple calcifications occurred in the necrotic core of the continuous DCIS, resulting in linear and branching (casting type) calcifications, a hallmark of high-grade DCIS on mammograms. In conclusion, refraction-based X-ray CT approaches the low-power light microscopic view of histological sections. It provides high-quality slice data for 3-D reconstruction and virtual ductoscopy.

  17. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion"…

  18. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  19. Academic Library Services in Virtual Worlds: An Examination of the Potential for Library Services in Immersive Environments

    ERIC Educational Resources Information Center

    Ryan, Jenna; Porter, Marjorie; Miller, Rebecca

    2010-01-01

    Current literature on libraries is abundant with articles about the uses and the potential of new interactive communication technology, including Web 2.0 tools. Recently, the advent and use of virtual worlds have received top billing in these works. Many library institutions are exploring these virtual environments; this exploration and the…

  20. Exploring the Instruction of Fluid Dynamics Concepts in an Immersive Virtual Environment: A Case Study of Pedagogical Strategies

    ERIC Educational Resources Information Center

    Lio, Cindy; Mazur, Joan

    2004-01-01

    The deployment of immersive, non-restrictive environments for instruction and learning presents a new set of challenges for instructional designers and educators. Adopting the conceptual frameworks of Sherin's (2002) learning while teaching and Vygotsky's (1978) cultural development via the mediation of tools, this paper explores one professor's…

  1. The Use of 3D Scanning and Photogrammetry Techniques in the Case Study of the Roman Theatre of Nikopolis. Surveying, Virtual Reconstruction and Restoration Study.

    NASA Astrophysics Data System (ADS)

    Bilis, T.; Kouimtzoglou, T.; Magnisali, M.; Tokmakidis, P.

    2017-02-01

    The aim of this paper is to present the specific methods by which 3D scanning and photogrammetric techniques were incorporated into the architectural study, the documentation and the graphic restoration study of the monument of the ancient theatre of Nikopolis. Traditional methods of surveying were enhanced by the use of 3D scanning, image-based 3D reconstruction, 3D remodelling and renderings. For this reason, a team of specialists from different scientific fields has been organized. This presented the opportunity to observe every change of the restoration design process, not only through common elevations and ground plans, but also in 3D space. It has also been very liberating to know what the monument will look like in this unique site after the restoration, so that the best possible intervention decisions could be made at the study stage. Moreover, these modern work tools of course helped to convince the authorities of the accuracy of the restoration actions and, finally, to make the proposal clear to the public.

  2. Initial Assessment of Human Performance Using the Gaiter Interaction Technique to Control Locomotion in Fully Immersive Virtual Environments

    DTIC Science & Technology

    2007-11-02

    and control the posture of the body should support the user’s interaction with the virtual world. Skills and actions, such as aiming a rifle and...Our general approach to interaction technique design is based on principles derived from an understanding of human perception and motor control...the stride length and cadence of virtual steps. Since Gaiter uses only the legs and pelvis, it does not interfere with actions performed by other

  3. Prenatal diagnosis, 3-D virtual rendering and lung sparing surgery by ligasure device in a baby with “CCAM and intralobar pulmonary sequestration”

    PubMed Central

    Molinaro, Francesco; Angotti, Rossella; Di Crescenzo, Vincenzo Giuseppe; Cortese, Antonio; Messina, Mario

    2016-01-01

    Abstract Congenital cystic lung lesions are a rare but clinically significant group of anomalies, including congenital cystic adenomatoid malformation (CCAM), pulmonary sequestration, congenital lobar emphysema (CLE) and bronchogenic cysts. Although knowledge of these lesions has increased in recent years, some aspects are still debated and controversial. Diagnosis is certainly one aspect that has undergone many changes in the last 15 years, due to the improvement of antenatal scanning and the introduction of 3-D reconstruction techniques. As is known, a prompt diagnosis has an essential role in the management of these children. New imaging studies such as the 3D volume rendering system are the focus of this paper. We describe our preliminary experience in a case of a hybrid lung lesion, which we approached by thoracoscopy after a preoperative study with 3D VR reconstruction. Our final assessment is absolutely positive. PMID:28352794

  4. Age and gestural differences in the ease of rotating a virtual 3D image on a large, multi-touch screen.

    PubMed

    Ku, Chao-Jen; Chen, Li-Chieh

    2013-04-01

    Providing a natural mapping between multi-touch gestures and manipulations of digital content is important for user-friendly interfaces. Although there are some guidelines for 2D digital content available in the literature, a guideline for manipulation of 3D content has yet to be developed. In this research, two sets of gestures were developed for experiments in the ease of manipulating 3D content on a touchscreen. As there typically are large differences between age groups in the ease of learning new interfaces, we compared a group of adults with a group of children. Each person carried out three tasks linked to rotating the digital model of a green turtle to inspect major characteristics of its body. Task completion time, subjective evaluations, and gesture changing frequency were measured. Results showed that using the conventional gestures for 2D object rotation was not appropriate in the 3D environment. Gestures that required multiple touch points hampered the real-time visibility of rotational effects on a large screen. While the cumulative effects of 3D rotations became complicated after intensive operations, simpler gestures facilitated the mapping between 2D control movements and 3D content displays. For rotation in Cartesian coordinates, moving one fingertip horizontally or vertically on a 2D touchscreen corresponded to the rotation angles of two axes for 3D content, while the relative movement between two fingertips was used to control the rotation angle of the third axis. Based on behavior analysis, adults and children differed in the diversity of gesture types and in the touch points with respect to the object's contours. Offering a robust mechanism for gestural inputs is necessary for universal control of such a system.
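
    The gesture-to-rotation mapping described in the abstract (one-finger horizontal/vertical drags driving two axes, the change in angle between two fingertips driving the third) can be sketched as follows; the gains, axis conventions, and sample touch points are assumptions for illustration, not the study's implementation.

    ```python
    # Illustrative mapping from touch gestures to 3D rotation matrices.
    import numpy as np

    GAIN = 0.01  # radians of rotation per pixel of one-finger drag (assumed)

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def one_finger_update(R, dx_px, dy_px):
        """Horizontal drag -> rotation about the vertical axis; vertical drag -> horizontal axis."""
        return rot_y(dx_px * GAIN) @ rot_x(dy_px * GAIN) @ R

    def segment_angle(p1, p2):
        return np.arctan2(p2[1] - p1[1], p2[0] - p1[0])

    def two_finger_update(R, p1_old, p2_old, p1_new, p2_new):
        """Change in the fingertip-segment angle -> rotation about the screen-normal axis."""
        return rot_z(segment_angle(p1_new, p2_new) - segment_angle(p1_old, p2_old)) @ R

    R = np.eye(3)
    R = one_finger_update(R, dx_px=40, dy_px=-10)                  # one-finger drag
    R = two_finger_update(R, (0, 0), (100, 0), (0, 0), (90, 45))   # two-finger twist
    print(np.round(R, 3))
    ```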

  5. Virtual Prototyping at CERN

    NASA Astrophysics Data System (ADS)

    Gennaro, Silvano De

    The VENUS (Virtual Environment Navigation in the Underground Sites) project is probably the largest Virtual Reality application to engineering design in the world. VENUS is just over one year old and offers a fully immersive and stereoscopic "flythru" of the LHC pits for the proposed experiments, including the experimental area equipment and the surface models that are being prepared for a territorial impact study. VENUS' Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines: as they are generated directly from the EUCLID CAD files, they are totally reliable, they can be updated in a matter of minutes, and they allow designers to explore them from inside, at one-to-one scale. Navigation can be performed on the computer screen, on a large stereoscopic projection screen, or in immersive conditions with a helmet and 3D mouse. By using specialised collision detection software, the computer can find optimal paths to lower each detector part into the pits and position it at its destination, letting us visualize the whole assembly process. During construction, these paths can be fed to a robot controller, which can operate the bridge cranes and build LHC almost without human intervention. VENUS is currently developing a multiplatform VR browser that will let the whole HEP community access LHC's Virtual Prototypes over the web.

  6. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences in vertical overestimation between display media are influenced more by the perceived distal object size than by the dimensionality of the display.

  7. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning1

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  8. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 4 Report: Virtual Mockup Maintenance Task Evaluation

    SciTech Connect

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    Task 4 report of a 3-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.

  9. Prefrontal cortex activated bilaterally by a tilt board balance task: a functional near-infrared spectroscopy study in a semi-immersive virtual reality environment.

    PubMed

    Ferrari, Marco; Bisconti, Silvia; Spezialetti, Matteo; Basso Moro, Sara; Di Palo, Caterina; Placidi, Giuseppe; Quaresima, Valentina

    2014-05-01

    The aim of this study was to assess the prefrontal cortex (PFC) oxygenation response to a 5-min incremental tilt board balance task (ITBBT) in a semi-immersive virtual reality (VR) environment driven by a depth-sensing camera. It was hypothesized that the PFC would be bilaterally activated in response to increasing ITBBT difficulty, given the PFC's involvement in allocating the attentional resources needed to maintain postural control. Twenty-two healthy male subjects were asked to use medial-lateral postural sways to maintain their equilibrium on a virtual tilt board (VTB) balancing over a pivot. When a subject was unable to keep the VTB angle within ±35°, the VTB turned red (error). An eight-channel fNIRS system was employed to measure changes in PFC oxygenated and deoxygenated hemoglobin (O2Hb and HHb, respectively). Results revealed that the number of board sways and errors increased with ITBBT difficulty. PFC activation was observed, with a tendency to plateau for both O2Hb and HHb changes within the last 2 min of the task. A significant main effect of the level of difficulty was found for O2Hb and HHb (p < 0.001). The study demonstrated that oxygenation increased over the PFC while subjects performed an ITBBT in a semi-immersive VR environment. This increase was modulated by task difficulty, suggesting that the PFC is bilaterally involved in attention-demanding tasks. Given its adaptability to the elderly and to patients with movement disorders, this task could be useful for diagnostic testing and functional neurorehabilitation.

  10. PC-Based Virtual Reality for CAD Model Viewing

    ERIC Educational Resources Information Center

    Seth, Abhishek; Smith, Shana S.-F.

    2004-01-01

    Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…

  11. Andragogical Characteristics and Expectations of University of Hawai'i Adult Learners in a 3D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Meeder, Rebecca L.

    2012-01-01

    The purpose of this study was to discover which andragogical characteristics and expectations of adult learners manifested themselves in the three-dimensional, multi-user virtual environment known as Second Life. This digital ethnographic study focused specifically on adult students within the University of Hawai'i Second Life group and their…

  12. Immersive group-to-group telepresence.

    PubMed

    Beck, Stephan; Kunert, André; Kulik, Alexander; Froehlich, Bernd

    2013-04-01

    We present a novel immersive telepresence system that allows distributed groups of users to meet in a shared virtual 3D world. Our approach is based on two coupled projection-based multi-user setups, each providing multiple users with perspectively correct stereoscopic images. At each site the users and their local interaction space are continuously captured using a cluster of registered depth and color cameras. The captured 3D information is transferred to the respective other location, where the remote participants are virtually reconstructed. We explore the use of these virtual user representations in various interaction scenarios in which local and remote users are face-to-face, side-by-side or decoupled. Initial experiments with distributed user groups indicate the mutual understanding of pointing and tracing gestures independent of whether they were performed by local or remote participants. Our users were excited about the new possibilities of jointly exploring a virtual city, where they relied on a world-in-miniature metaphor for mutual awareness of their respective locations.

  13. Use of 2.5-D and 3-D technology to evaluate control room upgrades

    SciTech Connect

    Hanes, L. F.; Naser, J.

    2006-07-01

    This paper describes an Electric Power Research Institute (EPRI) study in which 2.5-D and 3-D visualization technology was applied to evaluate the design of a nuclear power plant control room upgrade. The study involved converting 3-D CAD files of a planned upgrade into a photo-realistic virtual model and evaluating the value and usefulness of the model. Nuclear utility and EPRI evaluators viewed and interacted with the control room virtual model in both 2.5-D and 3-D representations. They identified how control room and similar virtual models may be used by utilities for design and evaluation purposes; assessed potential economic and other benefits; and identified limitations, potential problems, and other issues regarding use of visualization technology for this and similar applications. In addition, the Halden CREATE (Control Room Engineering Advanced Tool-kit Environment) Verification Tool was applied to evaluate features of the virtual model against US NRC NUREG-0700 Revision 2 human factors engineering guidelines [1]. The study results are very favorable for applying 2.5-D visualization technology to support upgrading nuclear power plant control rooms and other plant facilities. Results, however, show that today's 3-D immersive viewing systems are difficult to justify based on cost, availability, and the value of information provided for this application. (authors)

  14. Immersive Education, an Annotated Webliography

    ERIC Educational Resources Information Center

    Pricer, Wayne F.

    2011-01-01

    In this second installment of a two-part feature on immersive education a webliography will provide resources discussing the use of various types of computer simulations including: (a) augmented reality, (b) virtual reality programs, (c) gaming resources for teaching with technology, (d) virtual reality lab resources, (e) virtual reality standards…

  15. Digital Geology from field to 3D modelling and Google Earth virtual environment: methods and goals from the Furlo Gorge (Northern Apennines - Italy)

    NASA Astrophysics Data System (ADS)

    De Donatis, Mauro; Susini, Sara

    2014-05-01

    A new geological map of the Furlo Gorge was surveyed and compiled digitally, using digital tools such as mobile GIS and 3D modelling software at every step of the work. Phase 1: Starting in the lab, we planned the field project and designed the base cartography, forms and database in the way we judged best for collecting and storing data in order to produce a digital n-dimensional map. Bedding attitudes, outcrop sketches and descriptions, stratigraphic logs, structural features and other information were collected and organised in a structured database using a rugged tablet PC, a GPS receiver, digital cameras and, later, an Android smartphone with survey apps developed in-house. A new mobile GIS (BeeGIS) was developed starting from an open-source GIS (uDig); tools such as GPS connection, pen-drawing annotations, geonotes, a field book, photo synchronization and geotagging were designed for it. Phase 2: After some months of digital field work, all the information was processed to draw a geological map in a GIS environment, using both commercial (ArcGIS) and open-source (gvSIG, QGIS, uDig) software without major technical problems. Phase 3: When we moved on to building a 3D model (using 3DMove), passing through the assisted drawing of cross-sections (2DMove), we discovered a number of problems in the interpretation of geological structures (thrusts, normal faults) and, even more, in the interpretation of stratigraphic thicknesses and boundaries and their relationships with topography. Phase 4: Rather than redrawing the map "from the armchair", we decided to go back to the field and check directly what was wrong. Two main advantages came from this: (1) the mistakes we found could be reinterpreted and corrected directly in the field with all the digital tools we needed; (2) previous interpretations could be stored in GIS layers, keeping a record of the earlier work (including the mistakes). Phase 5: A 3D model built with 3DMove is already almost self

  16. Moving virtuality into reality: A comparison study of the effectiveness of traditional and alternative assessments of learning in a multisensory, fully immersive physics program

    NASA Astrophysics Data System (ADS)

    Gamor, Keysha Ingram

    This paper contains a research study that investigated the relative efficacy of using both a traditional paper-and-pencil assessment instrument and an alternative, virtual reality (VR) assessment instrument to assist educators and/or instructional designers in measuring learning in a virtual reality learning environment. To this end, this research study investigated assessment in VR, with the goal of analyzing aspects of student learning in VR that are feasible to access or capture by traditional assessments and alternative assessments. The researcher also examined what additional types of learning alternative assessments may offer. More specifically, this study compared the effectiveness of a traditional method with an alternative (performance-based) method of assessment that was used to examine the ability of the tools to accurately evidence the levels of students' understanding and learning. The domain area was electrostatics, a complex, abstract multidimensional concept, with which students often experience difficulty. Outcomes of the study suggest that, in the evaluation of learning in an immersive VR learning environment, assessments would most accurately manifest student learning if the assessment measure matched the learning environment itself. In this study, learning and assessing in the VR environment yielded higher final test scores than learning in VR and testing with traditional paper-and-pencil. Being able to transfer knowledge from a VR environment to other situations is critical in demonstrating the overall level of understanding of a concept. For this reason, the researcher recommends a combination of testing measures to enhance understanding of complex, abstract concepts.

  17. User Interface Technology Transfer to NASA's Virtual Wind Tunnel System

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1998-01-01

    Funded by NASA grants for four years, the Brown Computer Graphics Group has developed novel 3D user interfaces for desktop and immersive scientific visualization applications. This past grant period supported the design and development of a software library, the 3D Widget Library, which supports the construction and run-time management of 3D widgets. The 3D Widget Library is a mechanism for transferring user interface technology from the Brown Graphics Group to the Virtual Wind Tunnel system at NASA Ames as well as the public domain.

  18. Efficacy and safety of non-immersive virtual reality exercising in stroke rehabilitation (EVREST): a randomised, multicentre, single-blind, controlled trial

    PubMed Central

    Saposnik, Gustavo; Cohen, Leonardo G; Mamdani, Muhammad; Pooyania, Sepideth; Ploughman, Michelle; Cheung, Donna; Shaw, Jennifer; Hall, Judith; Nord, Peter; Dukelow, Sean; Nilanont, Yongchai; De los Rios, Felipe; Olmos, Lisandro; Levin, Mindy; Teasell, Robert; Cohen, Ashley; Thorpe, Kevin; Laupacis, Andreas; Bayley, Mark

    2016-01-01

    Summary Background Non-immersive virtual reality is an emerging strategy to enhance motor performance for stroke rehabilitation. There has been rapid adoption of non-immersive virtual reality as a rehabilitation strategy despite the limited evidence about its safety and effectiveness. Our aim was to compare the safety and efficacy of virtual reality with recreational therapy on motor recovery in patients after an acute ischaemic stroke. Methods In this randomised, controlled, single-blind, parallel-group trial we enrolled adults (aged 18–85 years) who had a first-ever ischaemic stroke and an upper extremity motor deficit (Chedoke-McMaster scale score of 3 or more) within 3 months of randomisation from 14 in-patient stroke rehabilitation units in four countries (Canada [11], Argentina [1], Peru [1], and Thailand [1]). Participants were randomly allocated (1:1) by a computer-generated assignment at enrolment to receive a programme of structured, task-oriented, upper extremity sessions (ten sessions, 60 min each) of either non-immersive virtual reality using the Nintendo Wii gaming system (VRWii) or simple recreational activities (playing cards, bingo, Jenga, or ball game) as add-on therapies to conventional rehabilitation over a 2-week period. All investigators assessing outcomes were masked to treatment assignment. The primary outcome was upper extremity motor performance measured by total time to complete the Wolf Motor Function Test (WMFT) at the end of the 2-week intervention period, analysed in the intention-to-treat population. This trial is registered with ClinicalTrials.gov, number NCT01406912. Findings The study was done between May 12, 2012, and Oct 1, 2015. We randomly assigned 141 patients: 71 received VRWii therapy and 70 received recreational activity. 121 (86%) patients (59 in the VRWii group and 62 in the recreational activity group) completed the final assessment and were included in the primary analysis. Each group

  19. Immersive Visual Analytics for Transformative Neutron Scattering Science

    SciTech Connect

    Steed, Chad A; Daniel, Jamison R; Drouhard, Margaret; Hahn, Steven E; Proffen, Thomas E

    2016-01-01

    The ORNL Spallation Neutron Source (SNS) provides the most intense pulsed neutron beams in the world for scientific research and development across a broad range of disciplines. SNS experiments produce large volumes of complex data that are analyzed by scientists with varying degrees of experience using 3D visualization and analysis systems. However, it is notoriously difficult to achieve proficiency with 3D visualizations. Because 3D representations are key to understanding the neutron scattering data, scientists are unable to analyze their data in a timely fashion resulting in inefficient use of the limited and expensive SNS beam time. We believe a more intuitive interface for exploring neutron scattering data can be created by combining immersive virtual reality technology with high performance data analytics and human interaction. In this paper, we present our initial investigations of immersive visualization concepts as well as our vision for an immersive visual analytics framework that could lower the barriers to 3D exploratory data analysis of neutron scattering data at the SNS.

  20. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photographic process reduces a three-dimensional (3D) wound to a two-dimensional record. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface digitizing scanners can be used as a powerful tool for analysing wounds and injury-causing instruments in trauma cases. 3D documentation of a skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated in two illustrative cases. Using this optical 3D digitizing method, the wounds (virtual 3D computer models of the skin and bone injuries) and a virtual 3D model of the injury-causing tool are documented graphically in 3D at real-life size and shape, and can be rotated on the computer screen in a CAD program. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in virtual space within a 3D CAD program to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and of all forensically relevant injuries using optical 3D scanners.

  1. Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study.

    PubMed

    Iorizzo, Dana B; Riley, Meghan E; Hayhoe, Mary; Huxlin, Krystel R

    2011-05-25

    The present experiments aimed to characterize the visual performance of subjects with long-standing, unilateral cortical blindness when walking in a naturalistic, virtual environment. Under static, seated testing conditions, cortically blind subjects are known to exhibit compensatory eye movement strategies. However, they still complain of significant impairment in visual detection during navigation. To assess whether this is due to a change in compensatory eye movement strategy between sitting and walking, we measured eye and head movements in subjects asked to detect peripherally-presented, moving basketballs. When seated, cortically blind subjects detected ∼80% of balls, while controls detected almost all balls. Seated blind subjects did not make larger head movements than controls, but they consistently biased their fixation distribution towards their blind hemifield. When walking, head movements were similar in the two groups, but the fixation bias decreased to the point that fixation distribution in cortically blind subjects became similar to that in controls - with one major exception: at the time of basketball appearance, walking controls looked primarily at the far ground, in upper quadrants of the virtual field of view; cortically blind subjects looked significantly more at the near ground, in lower quadrants of the virtual field. Cortically blind subjects detected only 58% of the balls when walking while controls detected ∼90%. Thus, the adaptive gaze strategies adopted by cortically blind individuals as a compensation for their visual loss are strongest and most effective when seated and stationary. Walking significantly alters these gaze strategies in a way that seems to favor walking performance, but impairs peripheral target detection. It is possible that this impairment underlies the experienced difficulty of those with cortical blindness when navigating in real life.

  2. Effects of immersion on visual analysis of volume data.

    PubMed

    Laha, Bireswar; Sensharma, Kriti; Schiffbauer, James D; Bowman, Doug A

    2012-04-01

    Volume visualization has been widely used for decades for analyzing datasets ranging from 3D medical images to seismic data to paleontological data. Many have proposed using immersive virtual reality (VR) systems to view volume visualizations, and there is anecdotal evidence of the benefits of VR for this purpose. However, there has been very little empirical research exploring the effects of higher levels of immersion for volume visualization, and it is not known how various components of immersion influence the effectiveness of visualization in VR. We conducted a controlled experiment in which we studied the independent and combined effects of three components of immersion (head tracking, field of regard, and stereoscopic rendering) on the effectiveness of visualization tasks with two x-ray microscopic computed tomography datasets. We report significant benefits of analyzing volume data in an environment involving those components of immersion. We find that the benefits do not necessarily require all three components simultaneously, and that the components have variable influence on different task categories. The results of our study improve our understanding of the effects of immersion on perceived and actual task performance, and provide guidance on the choice of display systems to designers seeking to maximize the effectiveness of volume visualization applications.

  3. Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training.

    PubMed

    Alaraj, Ali; Charbel, Fady T; Birk, Daniel; Tobin, Mathew; Luciano, Cristian; Banerjee, Pat P; Rizzi, Silvio; Sorenson, Jeff; Foley, Kevin; Slavin, Konstantin; Roitberg, Ben

    2013-01-01

    Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training on neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcome. Surgical education is undergoing overwhelming change as a result of the reduction of work hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes. Thus, there is a need for adjunctive means of neurosurgical training; one recent advancement is simulation technology. ImmersiveTouch is an augmented reality system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform uses multiple sensory modalities, re-creating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, percutaneous trigeminal rhizotomy, and simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such augmented reality neurosurgical modules and the feedback from neurosurgical residents.

  4. Role of Cranial and Spinal Virtual and Augmented Reality Simulation Using Immersive Touch Modules in Neurosurgical Training

    PubMed Central

    Alaraj, Ali; Charbel, Fady T.; Birk, Daniel; Tobin, Mathew; Luciano, Cristian; Banerjee, Pat P.; Rizzi, Silvio; Sorenson, Jeff; Foley, Kevin; Slavin, Konstantin; Roitberg, Ben

    2013-01-01

    Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training on neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcome. Surgical education is undergoing overwhelming change, with reduced working hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes; there is thus a need for adjunctive means of neurosurgical training, and simulation technology is a recent advancement in this direction. ImmersiveTouch (IT) is an augmented reality (AR) system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform utilizes multiple sensory modalities, recreating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, and percutaneous trigeminal rhizotomy, in addition to simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such AR neurosurgical modules and the feedback from neurosurgical residents. PMID:23254799

  5. Pathways for Learning from 3D Technology

    PubMed Central

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  6. Convergent validity and sex differences in healthy elderly adults for performance on 3D virtual reality navigation learning and 2D hidden maze tasks.

    PubMed

    Tippett, William J; Lee, Jang-Han; Mraz, Richard; Zakzanis, Konstantine K; Snyder, Peter J; Black, Sandra E; Graham, Simon J

    2009-04-01

    This study assessed the convergent validity of a virtual environment (VE) navigation learning task against the Groton Maze Learning Test (GMLT) and selected traditional neuropsychological tests in a group of healthy elderly adults (n = 24). The cohort was divided equally between males and females to explore performance variability due to sex differences, which were subsequently characterized and reported as part of the analysis. To facilitate performance comparisons, specific "efficiency" scores were created for both the VE navigation task and the GMLT. Men reached peak performance more rapidly than women during VE navigation and on the GMLT, and significantly outperformed women on the first learning trial in the VE. Results suggest reasonable convergent validity across the VE task, the GMLT, and selected neuropsychological tests for the assessment of spatial memory.

  7. Acquisition, Visualization and Analysis of Photo Real 3D Virtual Geology at High Accuracy: Oblique, Close Range Data Acquisition From the Ground With Digital Cameras, Terrestrial Laser Scanners and GPS

    NASA Astrophysics Data System (ADS)

    Xu, X.; Aiken, C. L.

    2005-12-01

    uses real photos for the surfaces. Petroleum companies have been the major users of these models, utilizing them in 3D stereo in their CAVE (Cave Automatic Virtual Environment) systems for virtual field trips and for integrated analysis of reservoir-characterization analogs. The models are also being incorporated into short courses and UTD courses, such as structural geology, before, during and after field trips to the sites. One of our models has been used by a visual-effects group for a television film. We will display several examples of these models at this meeting on a Geowall 3D stereo system using powerful and cost-effective open-source software.

  8. 3D pharmacophore-based virtual screening, docking and density functional theory approach towards the discovery of novel human epidermal growth factor receptor-2 (HER2) inhibitors.

    PubMed

    Gogoi, Dhrubajyoti; Baruah, Vishwa Jyoti; Chaliha, Amrita Kashyap; Kakoti, Bibhuti Bhushan; Sarma, Diganta; Buragohain, Alak Kumar

    2016-12-21

    Human epidermal growth factor receptor 2 (HER2) is one of the four members of the epidermal growth factor receptor (EGFR) family and is expressed to facilitate cellular proliferation across various tissue types. Therapies targeting HER2, a transmembrane glycoprotein with tyrosine kinase activity, offer promising prospects, especially in breast and gastric/gastroesophageal cancer patients. The persistence of both primary and acquired resistance to various routine drugs and antibodies is a disappointing outcome in the treatment of many HER2-positive cancer patients and a challenge that requires new and improved strategies to overcome it. In this study, novel HER2 inhibitors with an improved therapeutic index were identified using a highly correlating (r = 0.975) ligand-based pharmacophore model (Hypo1). Hypo1 was generated from a training set of 22 compounds with HER2 inhibitory activity, and this well-validated hypothesis was subsequently used as a 3D query to screen compounds from four databases, two of which were natural product databases. These compounds were then analyzed for compliance with Veber's drug-likeness rule and for optimum ADMET parameters. The selected compounds were subjected to molecular docking and Density Functional Theory (DFT) analysis to discern their molecular interactions at the active site of HER2. The findings presented here would be an important starting point towards the development of novel HER2 inhibitors using well-validated computational techniques.
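
    For context, a minimal sketch of a Veber drug-likeness filter of the kind mentioned above (Python with RDKit; the thresholds are Veber's published cut-offs, but the helper function and example SMILES are illustrative and not taken from the study):

      from rdkit import Chem
      from rdkit.Chem import Descriptors

      def passes_veber(smiles):
          """Veber's rule: at most 10 rotatable bonds and topological polar
          surface area (TPSA) of at most 140 A^2."""
          mol = Chem.MolFromSmiles(smiles)
          if mol is None:
              return False
          return (Descriptors.NumRotatableBonds(mol) <= 10 and
                  Descriptors.TPSA(mol) <= 140.0)

      # Illustrative screen of a small candidate list (SMILES chosen arbitrarily).
      candidates = ["CC(=O)Oc1ccccc1C(=O)O",       # aspirin
                    "CCN(CC)CCNC(=O)c1ccc(N)cc1"]  # procainamide
      hits = [smi for smi in candidates if passes_veber(smi)]
      print(hits)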

  9. Pharmacophore modelling, atom-based 3D-QSAR generation and virtual screening of molecules projected for mPGES-1 inhibitory activity.

    PubMed

    Misra, S; Saini, M; Ojha, H; Sharma, D; Sharma, K

    2017-01-01

    COX-2 inhibitors exhibit anticancer effects in various cancer models, but because of the adverse side effects associated with these inhibitors, targeting molecules downstream of COX-2 (such as mPGES-1) has been suggested. Even after calls for mPGES-1 inhibitor design, to date only a few published inhibitors target the enzyme and display anticancer activity. In the present study, we deployed both ligand- and structure-based drug design approaches to identify novel drug-like candidates as mPGES-1 inhibitors. Fifty-four compounds with measured mPGES-1 inhibitory values were used to develop a model with four pharmacophoric features. 3D-QSAR studies were undertaken to check the robustness of the model. Statistical parameters such as r² = 0.9924, q² = 0.5761 and F = 1139.7 indicated significant predictive ability of the proposed model. Our QSAR model exhibits sites where a hydrogen-bond donor, a hydrophobic group and an aromatic ring can be substituted so as to enhance the efficacy of an inhibitor. Furthermore, we used our validated pharmacophore model as a three-dimensional query to screen the FDA-approved Lopac database. Finally, five compounds were selected as potent mPGES-1 inhibitors on the basis of their docking energy and pharmacokinetic properties such as ADME and Lipinski's rule of five.
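
    For reference, the cross-validated q² quoted above is conventionally the leave-one-out statistic defined below (a standard QSAR convention, assumed here rather than stated in the record):

      q^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_{i/i}\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}

    where y_i is the observed activity of compound i, \hat{y}_{i/i} is its prediction from a model fitted with compound i left out, and \bar{y} is the mean observed activity; r² is the ordinary fitted correlation, which is why q² is the more conservative of the two figures.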

  10. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Combat training is very important for the army, and simulation of the real battlefield environment is of great significance; two-dimensional information can no longer meet the demand. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment has become possible. In simulating a 3D battlefield environment, in addition to the terrain, combat personnel and combat tools, the simulation of explosions, fire, smoke and other effects is also very important, since these effects enhance the senses of realism and immersion in the 3D scene. However, these special effects are irregular objects, which makes them difficult to simulate with conventional geometry; the simulation of irregular objects has therefore long been a difficult research topic in computer graphics. Here, a particle-system algorithm is used to simulate irregular objects. We design simulations of explosions, fire and smoke based on the particle system and apply them to the 3D battlefield scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with human-computer interaction, this ultimately yields a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
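
    A minimal sketch of the particle-system update loop that underlies such effects (Python; emitter rates, lifetimes and the gravity constant are illustrative assumptions, not values from the paper):

      import random

      GRAVITY = (0.0, -9.8, 0.0)   # assumed constant acceleration
      DT = 1.0 / 60.0              # assumed frame time in seconds

      def emit(n, origin):
          """Spawn n particles at the emitter with random velocity and lifetime."""
          return [{"pos": list(origin),
                   "vel": [random.uniform(-2, 2), random.uniform(4, 8), random.uniform(-2, 2)],
                   "life": random.uniform(0.5, 2.0)} for _ in range(n)]

      def update(particles, dt=DT):
          """Integrate motion, age the particles, and drop the dead ones."""
          alive = []
          for p in particles:
              for i in range(3):
                  p["vel"][i] += GRAVITY[i] * dt
                  p["pos"][i] += p["vel"][i] * dt
              p["life"] -= dt
              if p["life"] > 0:
                  alive.append(p)
          return alive

      # Example: an explosion burst of 200 particles stepped for one second.
      particles = emit(200, origin=(0.0, 0.0, 0.0))
      for _ in range(60):
          particles = update(particles)
      print(len(particles), "particles still alive")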

  11. Assessing the use of immersive virtual reality, mouse and touchscreen in pointing and dragging-and-dropping tasks among young, middle-aged and older adults.

    PubMed

    Chen, Jiayin; Or, Calvin

    2017-04-07

    This study assessed the use of an immersive virtual reality (VR) interface, a mouse and a touchscreen for one-directional pointing, multi-directional pointing, and dragging-and-dropping tasks involving targets of smaller and larger widths by young (n = 18; 18-30 years), middle-aged (n = 18; 40-55 years) and older adults (n = 18; 65-75 years). A three-way, mixed-factorial design was used for data collection. The dependent variables were the movement time required and the error rate. Our main findings were that the participants took more time and made more errors using the VR input interface than using the mouse or the touchscreen. This pattern applied in all three age groups and in all tasks, except for multi-directional pointing with a larger target width among the older group. Overall, older adults took longer to complete the tasks and made more errors than young or middle-aged adults. Larger target widths yielded shorter movement times and lower error rates in pointing tasks, but larger targets yielded higher error rates in dragging-and-dropping tasks. Our study indicates that virtual environments similar to the one we tested may be more suitable for displaying scenes than for manipulating objects that are small and require fine control. Although interacting with VR is relatively difficult, especially for older adults, there is still potential for older adults to adapt to the interface. Furthermore, adjusting the width of objects according to the type of manipulation required might be an effective way to improve performance.

  12. Identification of the Beer Component Hordenine as Food-Derived Dopamine D2 Receptor Agonist by Virtual Screening a 3D Compound Database

    PubMed Central

    Sommer, Thomas; Hübner, Harald; El Kerdawy, Ahmed; Gmeiner, Peter; Pischetsrieder, Monika; Clark, Timothy

    2017-01-01

    The dopamine D2 receptor (D2R) is involved in food reward and compulsive food intake. The present study developed a virtual screening (VS) method to identify food components, which may modulate D2R signalling. In contrast to their common applications in drug discovery, VS methods are rarely applied for the discovery of bioactive food compounds. Here, databases were created that exclusively contain substances occurring in food and natural sources (about 13,000 different compounds in total) as the basis for combined pharmacophore searching, hit-list clustering and molecular docking into D2R homology models. From 17 compounds finally tested in radioligand assays to determine their binding affinities, seven were classified as hits (hit rate = 41%). Functional properties of the five most active compounds were further examined in β-arrestin recruitment and cAMP inhibition experiments. D2R-promoted G-protein activation was observed for hordenine, a constituent of barley and beer, with approximately identical ligand efficacy as dopamine (76%) and a Ki value of 13 μM. Moreover, hordenine antagonised D2-mediated β-arrestin recruitment indicating functional selectivity. Application of our databases provides new perspectives for the discovery of bioactive food constituents using VS methods. Based on its presence in beer, we suggest that hordenine significantly contributes to mood-elevating effects of beer. PMID:28281694

  13. Identification of the Beer Component Hordenine as Food-Derived Dopamine D2 Receptor Agonist by Virtual Screening a 3D Compound Database

    NASA Astrophysics Data System (ADS)

    Sommer, Thomas; Hübner, Harald; El Kerdawy, Ahmed; Gmeiner, Peter; Pischetsrieder, Monika; Clark, Timothy

    2017-03-01

    The dopamine D2 receptor (D2R) is involved in food reward and compulsive food intake. The present study developed a virtual screening (VS) method to identify food components, which may modulate D2R signalling. In contrast to their common applications in drug discovery, VS methods are rarely applied for the discovery of bioactive food compounds. Here, databases were created that exclusively contain substances occurring in food and natural sources (about 13,000 different compounds in total) as the basis for combined pharmacophore searching, hit-list clustering and molecular docking into D2R homology models. From 17 compounds finally tested in radioligand assays to determine their binding affinities, seven were classified as hits (hit rate = 41%). Functional properties of the five most active compounds were further examined in β-arrestin recruitment and cAMP inhibition experiments. D2R-promoted G-protein activation was observed for hordenine, a constituent of barley and beer, with approximately identical ligand efficacy as dopamine (76%) and a Ki value of 13 μM. Moreover, hordenine antagonised D2-mediated β-arrestin recruitment indicating functional selectivity. Application of our databases provides new perspectives for the discovery of bioactive food constituents using VS methods. Based on its presence in beer, we suggest that hordenine significantly contributes to mood-elevating effects of beer.

  14. A new algorithm to diagnose atrial ectopic origin from multi lead ECG systems--insights from 3D virtual human atria and torso.

    PubMed

    Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Butters, Timothy D; Higham, Jonathan; Workman, Antony J; Hancox, Jules C; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms.
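
    The record does not spell out the localization algorithm; below is a minimal sketch of one plausible correlation-based template-matching approach to locating an ectopic focus from a multi-lead body-surface map (Python; this is an illustrative assumption, not the authors' published method):

      import numpy as np

      def localize_focus(measured_bsp, simulated_library):
          """Pick the candidate ectopic origin whose simulated body-surface
          potential (BSP) map best correlates with the measured map.

          measured_bsp: 1-D array of lead potentials at a chosen time instant.
          simulated_library: dict mapping an origin label to its simulated map.
          """
          best_site, best_r = None, -np.inf
          for site, template in simulated_library.items():
              r = np.corrcoef(measured_bsp, template)[0, 1]
              if r > best_r:
                  best_site, best_r = site, r
          return best_site, best_r

      # Toy example with a 64-lead map and three hypothetical candidate origins.
      rng = np.random.default_rng(0)
      library = {site: rng.normal(size=64)
                 for site in ("RA_appendage", "LA_roof", "pulmonary_vein")}
      measured = library["LA_roof"] + 0.1 * rng.normal(size=64)
      print(localize_focus(measured, library))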

  15. A New Algorithm to Diagnose Atrial Ectopic Origin from Multi Lead ECG Systems - Insights from 3D Virtual Human Atria and Torso

    PubMed Central

    Alday, Erick A. Perez; Colman, Michael A.; Langley, Philip; Butters, Timothy D.; Higham, Jonathan; Workman, Antony J.; Hancox, Jules C.; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms. PMID

  16. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, its applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct a submarine pipeline and its surrounding seabed terrain in the computer using the Horde3D graphics rendering engine, built on the foundation database "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from pipeline monitoring.

  17. Immersive Kendo (Gum-do) game with an intelligent cyber fighter

    NASA Astrophysics Data System (ADS)

    Yoon, Jungwon; Kim, Se-Hwan; Ryu, Jeha; Woo, Woontack

    2003-04-01

    This paper presents a new framework for an immersive kendo game with an intelligent cyber-fighter, which has its own internal needs, motivations, sets of multimodal sensors, a motor system, and a behavior system. Unlike conventional interfaces such as a keyboard or joystick, the proposed system provides a more natural and comfortable interface by exploiting multimodal interfaces such as 3D vision and speech recognition. In addition, the proposed 3D vision-based interface allows relatively free movement in 3D space compared with wired tracker-based interfaces. As a result, the user, holding a real sword, can experience immersive fighting with the cyber-fighter in the virtual environment. The proposed framework should have a wide variety of VR-based edutainment applications.

  18. A virtual reality environment for patient data visualization and endoscopic surgical planning.

    PubMed

    Foo, Jung-Leng; Lobe, Thom; Winer, Eliot

    2009-04-01

    Visualizing patient data in a three-dimensional (3D) representation can be an effective surgical planning tool. As medical imaging technologies improve with faster and higher resolution scans, the use of virtual reality for interacting with medical images adds another level of realism to a 3D representation. The software framework presented in this paper is designed to load and display any DICOM/PACS-compatible 3D image data for visualization and interaction in an immersive virtual environment. In "examiner" mode, the surgeon can interact with a 3D virtual model of the patient by using an intuitive set of controls designed to allow slicing, coloring, and windowing of the image to show different tissue densities and enhance important structures. In the simulated "endoscopic camera" mode, the surgeon can see through the point of view of a virtual endoscopic camera to navigate inside the patient. These tools allow the surgeon to perform virtual endoscopy on any suitable structure. The software is highly scalable, as it can run on anything from a single desktop computer to a cluster of computers in an immersive multiprojection virtual environment. By wearing a pair of stereo glasses, a surgeon becomes immersed within the model itself, thus providing a sense of realism, as if the surgeon is "inside" the patient.
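
    This is not the authors' framework, but a minimal sketch of the core step such a viewer relies on: assembling a DICOM series into a 3D volume and extracting an axial slice (Python with pydicom; the directory layout and helper names are illustrative assumptions):

      import glob
      import numpy as np
      import pydicom

      def load_volume(series_dir):
          """Read all DICOM slices in a directory and stack them into a 3D array,
          ordered by their position along the patient z-axis."""
          slices = [pydicom.dcmread(f) for f in glob.glob(f"{series_dir}/*.dcm")]
          slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
          volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
          return volume  # shape: (num_slices, rows, cols)

      def axial_slice(volume, k):
          """Return the k-th axial slice for 2D display or texture upload."""
          return volume[k]

      # Usage (hypothetical path):
      # vol = load_volume("/data/patient01/ct_series")
      # mid = axial_slice(vol, vol.shape[0] // 2)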

  19. The Virtual Glovebox (VGX): An Immersive Simulation System for Training Astronauts to Perform Glovebox Experiments in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey D.; Dalton, Bonnie (Technical Monitor)

    2002-01-01

    The era of the International Space Station (ISS) has finally arrived, providing researchers on Earth a unique opportunity to study long-term effects of weightlessness and the space environment on structures, materials and living systems. Many of the physical, biological and material science experiments planned for ISS will require significant input and expertise from astronauts, who must conduct the research, follow complicated assay procedures and collect data and samples in space. Containment is essential for much of this work, both to protect astronauts from potentially harmful biological, chemical or material elements in the experiments and to protect the experiments from contamination by airborne particles in the Space Station environment. When astronauts must open the hardware containing such experiments, glovebox facilities provide the necessary barrier between astronaut and experiment. On Earth, astronauts are faced with the demanding task of preparing for the many glovebox experiments they will perform in space. Only a short time can be devoted to training for each experimental task, and glovebox research accounts for only a small portion of overall training and mission objectives on any particular ISS mission. The quality of the research must also remain very high, requiring detailed experience and knowledge of instrumentation, anatomy and specific scientific objectives on the part of those who will conduct the research. This unique set of needs faced by NASA has led to the development of a new computer simulation tool, the Virtual Glovebox (VGX), which is designed to provide astronaut crews and support personnel with a means to quickly and accurately prepare and train for glovebox experiments in space.

  20. Visuospatial astronomy education in immersive digital planetariums

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.

    2008-06-01

    Even simple concepts in astronomy are notoriously difficult for the general public to understand. Many ideas involve three-dimensional (3D) spatial relationships among astronomical objects, yet much of the traditional teaching material used in astronomy education is two-dimensional (2D), and studies show that visualising mental rotations and perspective changes can be difficult for many people. The simplifications made when explaining one phenomenon may lead to new misconceptions about other concepts. Properly constructed 3D simulations can provide students with the multiple perspectives necessary for understanding. As a venue for virtual astronomical environments, the new class of digital video planetariums appearing in museums and science centres has the potential to bridge the comprehension gap in astronomy learning. We describe a research project that aims to evaluate the effectiveness of visualisations in both immersive and non-immersive settings, using freshman undergraduate students from a four-year college. Retaining the students over the course of a semester means that their misconceptions can be tracked and recorded weekly via curriculum tests.

  1. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium, but producing a 3D-CG animation takes a long time. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…

  2. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  3. Virtually There.

    ERIC Educational Resources Information Center

    Lanier, Jaron

    2001-01-01

    Describes tele-immersion, a new medium for human interaction enabled by digital technologies. It combines the display and interaction techniques of virtual reality with new vision technologies that transcend the traditional limitations of a camera. Tele-immersion stations observe people as moving sculptures without favoring a single point of view.…

  4. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  5. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that a 3D vision system combining the monocular stereoscopic display with a capturing camera builds a 3D virtual space for telemanipulation from a captured real 3D image. In this paper, we propose the monocular stereoscopic 3D display and capturing camera for a telemanipulation system. In addition, we describe results of depth estimation using multi-focus retinal images.
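
    The abstract does not detail how depth is recovered from the multi-focus retinal images. A common depth-from-focus approach selects, per pixel, the focal slice with the highest local sharpness; the sketch below illustrates that generic idea (the focus measure, window size, and focal distances are illustrative assumptions, not the authors' method).

        # Generic depth-from-focus sketch: per pixel, pick the focal slice with
        # the strongest local sharpness (squared-Laplacian focus measure).
        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def depth_from_focus(stack, focus_distances):
            """stack: (N, H, W) images taken at N focus settings;
            focus_distances: the N focal distances, in metres."""
            sharpness = np.stack([uniform_filter(laplace(img.astype(float)) ** 2, size=9)
                                  for img in stack])      # (N, H, W) focus measure
            best = np.argmax(sharpness, axis=0)           # sharpest slice index per pixel
            return np.asarray(focus_distances)[best]      # per-pixel depth estimate

        # Usage with synthetic data:
        stack = np.random.rand(5, 64, 64)
        depth_map = depth_from_focus(stack, focus_distances=[0.3, 0.5, 0.8, 1.2, 2.0])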

  6. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products, and learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation built on the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subjects? For whom?

  7. Immersive Learning Experiences for Surgical Procedures.

    PubMed

    Cha, Young-Woon; Dou, Mingsong; Chabra, Rohan; Menozzi, Federico; State, Andrei; Wallen, Eric; Fuchs, Henry

    2016-01-01

    This paper introduces a computer-based system that is designed to record a surgical procedure with multiple depth cameras and reconstruct in three dimensions the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be immersively examined at a later time; equipped with a Virtual Reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded surgical procedure with simple VCR-like controls (play, pause, rewind, fast forward). The reconstruction can be annotated in space and time to provide users with more information about the scene. We expect such a system to be useful in applications such as training of medical students and nurses.
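
    The VCR-like playback described above amounts to stepping a cursor through a time-ordered sequence of reconstructed frames at a signed rate. The sketch below is a minimal, hypothetical controller for that behaviour; the class and method names are illustrative assumptions, not the authors' implementation.

        # Hypothetical VCR-style playback controller over recorded 3D frames.
        class Playback:
            def __init__(self, frames, fps=30.0):
                self.frames = frames      # reconstructed frames in capture order
                self.fps = fps
                self.cursor = 0.0         # fractional frame index
                self.rate = 0.0           # 0 = paused, 1 = play, -4 = rewind, 4 = fast forward

            def play(self):         self.rate = 1.0
            def pause(self):        self.rate = 0.0
            def rewind(self):       self.rate = -4.0
            def fast_forward(self): self.rate = 4.0

            def tick(self, dt):
                """Advance by dt wall-clock seconds; return the frame to render."""
                self.cursor += self.rate * self.fps * dt
                self.cursor = max(0.0, min(self.cursor, len(self.frames) - 1))
                return self.frames[int(self.cursor)]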

  8. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery as a result of developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove and Ascension Technologies tracker, is in active development and has already been used to visualize defects preoperatively. In the near future it will be used to plan surgeries more fully and to compute the projected result on soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, and networked virtual environment.

  9. Radiological tele-immersion for next generation networks.

    PubMed

    Ai, Z; Dech, F; Rasmussen, M; Silverstein, J C

    2000-01-01

    Since the acquisition of high-resolution three-dimensional patient images has become widespread, medical volumetric datasets (CT or MR) larger than 100 MB and encompassing more than 250 slices are common. It is important to make this patient-specific data quickly available and usable to many specialists at different geographical sites. Web-based systems have been developed to provide volume or surface rendering of medical data over networks at low fidelity, but these cannot adequately handle stereoscopic visualization or huge datasets. State-of-the-art virtual reality techniques and high-speed networks have made it possible to create an environment in which geographically distributed clinicians can immersively share these massive datasets in real time. An object-oriented method for instantaneously importing medical volumetric data into Tele-Immersive environments has been developed at the Virtual Reality in Medicine Laboratory (VRMedLab) at the University of Illinois at Chicago (UIC). This networked-VR setup is based on LIMBO, an application framework or template that provides the basic capabilities of Tele-Immersion. We have developed a modular, general-purpose Tele-Immersion program that automatically combines 3D medical data with the methods for handling the data. For this purpose a DICOM loader for IRIS Performer has been developed. The loader was designed for SGI machines as a shared object, which is executed at LIMBO's runtime. The loader loads not only the selected DICOM dataset, but also methods for rendering, handling, and interacting with the data, bringing networked, real-time, stereoscopic interaction with radiological data to reality. Collaborative, interactive methods currently implemented in the loader include cutting planes and windowing. The Tele-Immersive environment has been tested on the UIC campus over an ATM network. We tested the environment with 3 nodes; one ImmersaDesk at the VRMedLab, one CAVE at the Electronic Visualization Laboratory (EVL) on
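
    The VRMedLab loader is tied to IRIS Performer and SGI hardware, but the assembly step it performs, turning a DICOM series into a renderable volume, can be sketched today with pydicom and NumPy. The snippet below is a minimal illustration of that step only, not the loader described in the record.

        # Minimal sketch: assemble a DICOM series into a 3D intensity volume.
        import glob
        import numpy as np
        import pydicom

        def load_dicom_volume(directory):
            slices = [pydicom.dcmread(f) for f in glob.glob(f"{directory}/*.dcm")]
            # Order slices along the scan axis using the patient-space z position.
            slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
            volume = np.stack([ds.pixel_array for ds in slices]).astype(np.float32)
            # Apply the rescale transform so voxels are in modality units (e.g. HU).
            slope = float(getattr(slices[0], "RescaleSlope", 1.0))
            intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
            return volume * slope + intercept

        # volume = load_dicom_volume("/path/to/ct_series")  # shape: (slices, rows, cols)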

  10. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

    Geologic problems and datasets are often 3D or 4D in nature, yet they are typically projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" the collapsed dimension mentally, creating a cognitive challenge, especially for new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of the seafloor that most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015), we will assess how well 3D movies enhance learning. The class will be split into two groups, one who learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other who learns with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?" with the opportunity to further elaborate on the effectiveness of the visualization.
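
    Outside the KeckCAVES and the 3DVisualizer software named above, the underlying idea of rendering a bathymetric grid as a 3D surface can be sketched with Matplotlib. The snippet below is an illustrative stand-in using a synthetic mid-ocean-ridge profile, not the visualization pipeline used for the class.

        # Illustrative 3D surface plot of a synthetic mid-ocean-ridge bathymetry grid.
        import numpy as np
        import matplotlib.pyplot as plt

        x, y = np.meshgrid(np.linspace(-500, 500, 200), np.linspace(-500, 500, 200))  # km
        depth = -4000 + 2000 * np.exp(-(x / 80.0) ** 2)   # ridge crest above the abyssal plain (m)

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.plot_surface(x, y, depth, cmap="viridis", linewidth=0)
        ax.set_xlabel("Distance (km)")
        ax.set_ylabel("Distance (km)")
        ax.set_zlabel("Depth (m)")
        plt.show()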

  11. Immersive Learning Technologies

    DTIC Science & Technology

    2009-08-20

    Team  Games  James Xu  Virtual Worlds  Keysha Gamor  Mobile  Judy Brown  Web 2.0  Mark Friedman 3 … and It’s Not Just Games  “The smartest... Web 2.0 , Twitter 13 Questions or Comments? Peter Smith Team Lead, Immersive Learning Technologies peter.smith.ctr@adlnet.gov +1.407.384.5572

  12. Virtual Drilling - Sculpturing in 3-D Volumes

    DTIC Science & Technology

    2007-11-02

    Only fragmentary text is available for this record. Recoverable content describes a virtual drilling (3-D volume sculpting) application in which the drill view is controlled either with the mouse or by entering a perspective angle for greater accuracy, with figures showing different views of a maxillary incisor after drilling; presented Oct. 25-28, 2001, Istanbul, Turkey, and supported in part by the European Social Fund.

  13. iMedic: a two-handed immersive medical environment for distributed interactive consultation.

    PubMed

    Mlyniec, Paul; Jerald, Jason; Yoganandan, Arun; Seagull, F Jacob; Toledo, Fernando; Schultheis, Udo

    2011-01-01

    We describe a two-handed, immersive, and distributed 3D medical system that enables intuitive interaction with multimedia objects and space. The system is applicable to a number of virtual reality and tele-consulting scenarios. Various features were implemented, including measurement tools, interactive segmentation, non-orthogonal planar views, and 3D markup. User studies demonstrated the system's effectiveness in fundamental 3D tasks, showing that iMedic's two-handed interface enables placement and construction of 3D objects 4.5-4.7 times as fast as a mouse interface and 1.3-1.7 times as fast as a one-handed wand interface. In addition, avatar-to-avatar collaboration (two iMedic users in a shared space: one subject and one mentor) was shown to be more effective than face-to-face collaboration (one iMedic user/subject and one live mentor) for three tasks.

  14. Cryogenic 3D printing for tissue engineering.

    PubMed

    Adamkiewicz, Michal; Rubinsky, Boris

    2015-12-01

    We describe a new cryogenic 3D printing technology for freezing hydrogels, with a potential impact on tissue engineering. We show that complex frozen hydrogel structures can be generated when the 3D object is printed immersed in a liquid coolant (liquid nitrogen) whose upper surface is maintained at the same level as the highest deposited layer of the object. This novel approach ensures that the freezing process is controlled precisely and that already printed frozen layers remain at a constant temperature. We describe the device and present results which illustrate the potential of the new technology.
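
    The central control idea, keeping the liquid-nitrogen surface level with the highest printed layer, can be expressed as a simple feedback loop. The sketch below is purely illustrative: the printer and coolant interfaces are hypothetical placeholders, not the authors' apparatus.

        # Hypothetical control loop: keep the coolant surface at the top printed layer.
        import time

        def maintain_coolant_level(printer, coolant, tolerance_mm=0.2, period_s=0.5):
            """printer.current_layer_height_mm(), printer.is_printing(), and
            coolant.level_mm()/add()/drain() are assumed device interfaces
            used here only for illustration."""
            while printer.is_printing():
                target = printer.current_layer_height_mm()
                error = target - coolant.level_mm()
                if error > tolerance_mm:
                    coolant.add()       # top up liquid nitrogen
                elif error < -tolerance_mm:
                    coolant.drain()     # lower the bath
                time.sleep(period_s)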

  15. The Overlapping Worlds View: Analyzing Identity Transformation in Real and Virtual Worlds and the Effects on Learning

    ERIC Educational Resources Information Center

    Evans, Michael A.; Wang, Feihong

    2008-01-01

    Of late, digital game-based learning has attracted game designers, researchers and educators alike. Immersion in the virtual 3D environment of a game may have positive effects on K-12 students' cultivation of self (Dodge et al., 2006). Currently, two opposing views related to game-based identity formation are presented in the literature: the…

  16. Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.

    PubMed

    Jackson, Bret; Keefe, Daniel F

    2016-04-01

    Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface.
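
    The "lift" step described above takes a curve selected in 2D sketch coordinates and places it on a plane positioned in the 3D scene. The sketch below illustrates that mapping with basic linear algebra; the plane parameters and function name are assumptions for illustration, not the Lift-Off implementation.

        # Sketch: map 2D sketch-plane curve points (u, v) onto a plane embedded in 3D.
        import numpy as np

        def lift_curve(curve_uv, plane_origin, u_axis, v_axis):
            """curve_uv: (N, 2) points in sketch coordinates; u_axis/v_axis: 3D basis
            vectors spanning the plane on which the sketch is positioned in VR."""
            curve_uv = np.asarray(curve_uv, dtype=float)
            return (np.asarray(plane_origin, dtype=float)
                    + curve_uv[:, :1] * np.asarray(u_axis, dtype=float)
                    + curve_uv[:, 1:] * np.asarray(v_axis, dtype=float))  # (N, 3) world points

        # Example: a short curve lifted onto a plane 1.5 m ahead of the user, tilted 30 degrees.
        curve = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.3)]
        origin = np.array([0.0, 1.2, -1.5])
        u = np.array([1.0, 0.0, 0.0])
        v = np.array([0.0, np.cos(np.radians(30)), np.sin(np.radians(30))])
        print(lift_curve(curve, origin, u, v))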

  17. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users direct their gaze in 3D virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision(®) to be used with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
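
    A common geometric approach to recovering a 3D gaze point from binocular eye tracking is to intersect the left-eye and right-eye gaze rays, taking the midpoint of their closest approach when they do not meet exactly. The sketch below shows that baseline computation; it is not necessarily the optimized method the authors propose.

        # Sketch: 3D gaze point as the midpoint of closest approach of two gaze rays.
        import numpy as np

        def gaze_point(p_left, d_left, p_right, d_right):
            """p_*: eye positions; d_*: gaze direction vectors (3D)."""
            p_l, p_r = np.asarray(p_left, float), np.asarray(p_right, float)
            u, v = np.asarray(d_left, float), np.asarray(d_right, float)
            w0 = p_l - p_r
            a, b, c = u @ u, u @ v, v @ v
            d, e = u @ w0, v @ w0
            denom = a * c - b * b              # near zero when the rays are parallel
            s = (b * e - c * d) / denom        # parameter along the left ray
            t = (a * e - b * d) / denom        # parameter along the right ray
            return 0.5 * ((p_l + s * u) + (p_r + t * v))

        # Eyes 6.5 cm apart, both converging on a point roughly 50 cm ahead:
        print(gaze_point([-0.0325, 0, 0], [0.0649, 0, 0.9979],
                         [0.0325, 0, 0], [-0.0649, 0, 0.9979]))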

  18. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current toolset includes a tool for selecting points of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
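
    One plausible computation behind a ruler tool of this kind, when the two selected points of interest are given as Mars-centered latitude/longitude, is a great-circle distance on the mean Mars radius. The sketch below illustrates that calculation only; the waypoint coordinates are hypothetical and this is not the mission software.

        # Illustrative great-circle (surface) distance between two points on Mars.
        import math

        MARS_RADIUS_KM = 3389.5   # mean Mars radius

        def mars_surface_distance_km(lat1, lon1, lat2, lon2):
            """Haversine distance between two (latitude, longitude) points in degrees."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
            return 2 * MARS_RADIUS_KM * math.asin(math.sqrt(a))

        # Example: distance between two hypothetical rover waypoints.
        print(mars_surface_distance_km(-4.5895, 137.4417, -4.6000, 137.4500))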

  19. Planning, implementation and optimization of future space missions using an immersive visualization environment (IVE) machine.

    PubMed

    Harris, E Nathan; Morgenthaler, George W

    2004-07-01

    Beginning in 1995, a team of 3-D engineering visualization experts assembled at the Lockheed Martin Space Systems Company and began to develop innovative virtual prototyping simulation tools for performing ground processing and real-time visualization of design and planning of aerospace missions. At the University of Colorado, a team of 3-D visualization experts also began developing the science of 3-D visualization and immersive visualization at the newly founded British Petroleum (BP) Center for Visualization, which began operations in October 2001. BP acquired ARCO in 2000 and awarded the 3-D flexible IVE developed by ARCO (beginning in 1990) to the University of Colorado (CU), the winner of a competition among six universities. CU then hired Dr. G. Dorn, the leader of the ARCO team, as Center Director, along with the other experts, to apply 3-D immersive visualization to aerospace and other university research fields, while continuing research on surface interpretation of seismic data and 3-D volumes. This paper recounts further progress and outlines plans for aerospace applications at Lockheed Martin and CU.

  20. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    DTIC Science & Technology

    2014-10-28

    Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment. Distribution A: Approved... The recoverable abstract fragment describes disparities (up to ~20 arc min) that are fused into a single binocular percept when presented briefly, and that result in increased perceptions of depth