Science.gov

Sample records for 3-d virtual environment

  1. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning that simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  2. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building not only to meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, the technologies used (such as XNA Game Studio, the .NET Framework, and Autodesk software packages), and, finally, the applicability of our implementation on a variety of architectures, including Xbox 360 and PC. This paper also summarizes the results of our evaluation and the lessons learned from our effort.

  3. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  4. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  5. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  6. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among a large number of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D…

  7. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  8. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  9. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as their impact on student motivation and learning. Therefore, our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  10. Measuring Knowledge Acquisition in 3D Virtual Learning Environments.

    PubMed

    Nunes, Eunice P dos Santos; Roque, Licínio G; Nunes, Fatima de Lourdes dos Santos

    2016-01-01

    Virtual environments can contribute to the effective learning of various subjects for people of all ages. Consequently, they assist in reducing the cost of maintaining physical structures of teaching, such as laboratories and classrooms. However, the measurement of how learners acquire knowledge in such environments is still incipient in the literature. This article presents a method to evaluate the knowledge acquisition in 3D virtual learning environments (3D VLEs) by using the learner's interactions in the VLE. Three experiments were conducted that demonstrate the viability of using this method and its computational implementation. The results suggest that it is possible to automatically assess learning in predetermined contexts and that some types of user interactions in 3D VLEs are correlated with the user's learning differential. PMID:26915117
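
    As a rough illustration of the kind of analysis described above, the sketch below (not the authors' implementation; all variable names and numbers are hypothetical) correlates one type of logged interaction with each learner's "learning differential", i.e. the post-test score minus the pre-test score.

        import numpy as np

        # Hypothetical data: one value per learner.
        interactions = np.array([12, 30, 7, 22, 18, 41, 9, 25])   # e.g., 3D objects inspected
        pre_test     = np.array([40, 55, 35, 50, 45, 60, 38, 52])
        post_test    = np.array([55, 78, 40, 70, 60, 88, 45, 74])

        learning_differential = post_test - pre_test               # knowledge gained
        r = np.corrcoef(interactions, learning_differential)[0, 1] # Pearson correlation
        print(f"Pearson r between interaction count and learning gain: {r:.2f}")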

  11. Consultation virtual collaborative environment for 3D medicine.

    PubMed

    Krsek, Premysl; Spanel, Michal; Svub, Miroslav; Stancl, Vít; Siler, Ondrej; Sára, Vítezslav

    2008-01-01

    This article focuses on the problems of a consultation virtual collaborative environment designed to support 3D medical applications. The system allows loading CT/MR data from a PACS system, segmenting the data, and building 3D models of tissues. It allows distant 3D consultations of the data between technicians and surgeons. The system is designed as a three-layer client-server architecture. Communication between clients and the server is done via the HTTP/HTTPS protocol. Results and tests have confirmed that today's standard network latency and dataflow do not affect the usability of our system. PMID:19162770

  12. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are being transferred into 3D versions with regard to the specific content to be displayed. Virtual worlds (VWs) are becoming a promising area of interest because of the possibility to dynamically modify content and to cooperate with multiple users on tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  13. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations.

  14. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  15. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from Web3D technologies to create courses with interactive 3D materials. There are many open source and commercial products offering 3D technologies over the web…

  16. From Multi-User Virtual Environment to 3D Virtual Learning Environment

    ERIC Educational Resources Information Center

    Livingstone, Daniel; Kemp, Jeremy; Edgar, Edmund

    2008-01-01

    While digital virtual worlds have been used in education for a number of years, advances in the capabilities and spread of technology have fed a recent boom in interest in massively multi-user 3D virtual worlds for entertainment, and this in turn has led to a surge of interest in their educational applications. In this paper we briefly review the…

  17. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  18. Contextual EFL Learning in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Lan, Yu-Ju

    2015-01-01

    The purposes of the current study are to develop virtually immersive EFL learning contexts for EFL learners in Taiwan to preview and review English materials beyond the regular English class schedule. A two-iteration action research study lasting one semester was conducted to evaluate the effects of virtual contexts on learners' EFL learning. 132…

  19. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments" (http://www.le.ac.uk/moose)…

  20. The Cognitive Apprenticeship Theory for the Teaching of Mathematics in an Online 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Paraskeva, Fotini

    2013-01-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective.…

  1. Three Primary School Students' Cognition about 3D Rotation in a Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Yeh, Andy

    2010-01-01

    This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students…

  2. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  3. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  4. Modulation of cortical activity in 2D versus 3D virtual reality environments: an EEG study.

    PubMed

    Slobounov, Semyon M; Ray, William; Johnson, Brian; Slobounov, Elena; Newell, Karl M

    2015-03-01

    There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1 we examined behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, the power of frontal midline EEG theta (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive 3D enriched environment requires allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings. PMID:25448267

  5. Using virtual 3D audio in multispeech channel and multimedia environments

    NASA Astrophysics Data System (ADS)

    Orosz, Michael D.; Karplus, Walter J.; Balakrishnan, Jerry D.

    2000-08-01

    The advantages and disadvantages of using virtual 3-D audio in mission-critical, multimedia display interfaces were evaluated. The 3D audio platform seems to be an especially promising candidate for aircraft cockpits, flight control rooms, and other command and control environments in which operators must make mission-critical decisions while handling demanding and routine tasks. Virtual audio signal processing creates the illusion for a listener wearing conventional earphones that each of a multiplicity of simultaneous speech or audio channels is originating from a different, program-specified location in virtual space. To explore the possible uses of this new, readily available technology, a test bed simulating some of the conditions experienced by the chief flight test coordinator at NASA's Dryden Flight Research Center was designed and implemented. Thirty test subjects simultaneously performed routine tasks requiring constant hand-eye coordination, while monitoring four speech channels, each generating continuous speech signals, for the occurrence of pre-specified keywords. Performance measures included accuracy in identifying the keywords, accuracy in identifying the speaker of the keyword, and response time. We found substantial improvements on all of these measures when comparing virtual audio with conventional, monaural transmissions. We also explored the effects on operator performance of different spatial configurations of the audio sources in 3-D space, of simulated movement (dither) in the source locations, and of providing graphical redundancy. Some of these manipulations were less effective and may even decrease performance efficiency, even though they improve some aspects of the virtual space simulation.

  6. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  7. The cognitive apprenticeship theory for the teaching of mathematics in an online 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Bouta, Hara; Paraskeva, Fotini

    2013-03-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective. To this end, we propose a pedagogical framework based on the cognitive apprenticeship for deriving principles and guidelines to inform the design, development and use of a 3D virtual environment. This study examines how the use of a 3D virtual world facilitates the teaching of mathematics in primary education by combining design principles and guidelines based on the Cognitive Apprenticeship Theory and the teaching methods that this theory introduces. We focus specifically on 5th and 6th grade students' engagement (behavioral, affective and cognitive) while learning fractional concepts over a period of two class sessions. Quantitative and qualitative analyses indicate considerable improvement in the engagement of the students who participated in the experiment. This paper presents the findings regarding students' cognitive engagement in the process of comprehending basic fractional concepts - notoriously hard for students to master. The findings are encouraging and suggestions are made for further research.

  8. Going Virtual… or Not: Development and Testing of a 3D Virtual Astronomy Environment

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, L.; Speck, A.; Ding, N.; Baldridge, S.; Witzig, S.; Laffey, J.

    2013-04-01

    We present our preliminary results of a pilot study of students' knowledge transfer of an astronomy concept into a new environment. We also share our discoveries on what aspects of a 3D environment students consider to be motivational or discouraging for their learning. This study was conducted among 64 non-science major students enrolled in an astronomy laboratory course. During the course, students learned the concept and applications of Kepler's laws using a 2D interactive environment. Later in the semester, the students were placed in a 3D environment in which they were asked to conduct observations and to answer a set of questions pertaining to Kepler's laws of planetary motion. In this study, we were interested in observing, scrutinizing, and assessing students' behavior: from choices that they made while creating their avatars (virtual representations), to tools they chose to use, to their navigational patterns, to their levels of discourse in the environment. These helped us to identify what features of the 3D environment our participants found to be helpful and interesting and what tools created unnecessary clutter and distraction. The students' social behavior patterns in the virtual environment, together with their answers to the questions, helped us to determine how well they understood Kepler's laws, how well they could transfer the concepts to a new situation, and at what point a motivational tool such as a 3D environment becomes a disruption to constructive learning. Our findings confirmed that students construct deeper knowledge of a concept when they are fully immersed in the environment.

  9. Versatile, Immersive, Creative and Dynamic Virtual 3-D Healthcare Learning Environments: A Review of the Literature

    PubMed Central

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and “serious gaming” that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing adoption, such as increased knowledge, self-directed learning, and peer collaboration, by academics, healthcare professionals, and business executives are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers’ Diffusion of Innovations Theory and Siemens’ Connectivism Theory for today’s learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  10. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning environments: a review of the literature.

    PubMed

    Hansen, Margaret M

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and "serious gaming" that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing adoption, such as increased knowledge, self-directed learning, and peer collaboration, by academics, healthcare professionals, and business executives are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers' Diffusion of Innovations Theory and Siemens' Connectivism Theory for today's learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  11. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  12. Molecular surface point environments for virtual screening and the elucidation of binding patterns (MOLPRINT 3D).

    PubMed

    Bender, Andreas; Mussa, Hamse Y; Gill, Gurprem S; Glen, Robert C

    2004-12-16

    A novel method (MOLPRINT 3D) for virtual screening and the elucidation of ligand-receptor binding patterns is introduced that is based on environments of molecular surface points. The descriptor uses points relative to the molecular coordinates; thus it is translationally and rotationally invariant. Due to its local nature, conformational variations cause only minor changes in the descriptor. If surface point environments are combined with the Tanimoto coefficient and applied to virtual screening, they achieve retrieval rates comparable to those of two-dimensional (2D) fingerprints. The identification of active structures with minimal 2D similarity ("scaffold hopping") is facilitated. In combination with information-gain-based feature selection and a naive Bayesian classifier, information from multiple molecules can be combined and classification performance can be improved. Selected features are consistent with experimentally determined binding patterns. Examples are given for angiotensin-converting enzyme inhibitors, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, and thromboxane A2 antagonists. PMID:15588092
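
    To make the screening step concrete, here is a minimal sketch (assumed, not taken from the paper) of ranking a library of molecules against a query using the Tanimoto coefficient over sets of surface-point environment features; the feature extraction itself is not shown and the feature names are placeholders.

        # Tanimoto (Jaccard) coefficient over binary feature sets.
        def tanimoto(a: set, b: set) -> float:
            if not a and not b:
                return 0.0
            common = len(a & b)
            return common / (len(a) + len(b) - common)

        # Hypothetical surface-point environment features per molecule.
        query = {"env_01", "env_07", "env_12", "env_31"}
        library = {
            "mol_A": {"env_01", "env_07", "env_12", "env_44"},
            "mol_B": {"env_02", "env_05"},
        }
        # Molecules most similar to the query are retrieved first.
        ranked = sorted(library, key=lambda m: tanimoto(query, library[m]), reverse=True)
        print(ranked)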

  13. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly employed for VR applications, several factors prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many of the 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while the scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems to be ideal for interaction using a 3D dome display, other non-spherical grids can be used as well.

  14. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  15. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  16. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  17. A Parameterizable Framework for Replicated Experiments in Virtual 3D Environments

    NASA Astrophysics Data System (ADS)

    Biella, Daniel; Luther, Wolfram

    This paper reports on a parameterizable 3D framework that provides 3D content developers with an initial spatial starting configuration, metaphorical connectors for accessing exhibits or interactive 3D learning objects or experiments, and other optional 3D extensions, such as a multimedia room, a gallery, username identification tools and an avatar selection room. The framework is implemented in X3D and uses a Web-based content management system. It has been successfully used for an interactive virtual museum for key historical experiments and in two additional interactive e-learning implementations: an African arts museum and a virtual science centre. It can be shown that, by reusing the framework, the production costs for the latter two implementations can be significantly reduced and content designers can focus on developing educational content instead of producing cost-intensive out-of-focus 3D objects.

  18. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
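
    For readers unfamiliar with the tracking-error metric mentioned above, the following sketch shows one common way to compute a normalized RMS tracking error between a target trajectory and the user's tracked position; the normalization by the RMS amplitude of the target motion is an assumption, not a detail taken from the paper.

        import numpy as np

        def normalized_rms_error(target: np.ndarray, cursor: np.ndarray) -> float:
            """target, cursor: (N, 3) arrays of 3D positions sampled at the same times."""
            err = np.linalg.norm(cursor - target, axis=1)             # per-sample 3D error
            rms_err = np.sqrt(np.mean(err ** 2))
            centered = target - target.mean(axis=0)
            rms_target = np.sqrt(np.mean(np.linalg.norm(centered, axis=1) ** 2))
            return rms_err / rms_target                               # dimensionless ratio

        t = np.linspace(0, 10, 500)
        target = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])     # synthetic target path
        cursor = target + np.random.normal(scale=0.05, size=target.shape)  # noisy tracking
        print(normalized_rms_error(target, cursor))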

  19. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  20. Using a Quest in a 3D Virtual Environment for Student Interaction and Vocabulary Acquisition in Foreign Language Learning

    ERIC Educational Resources Information Center

    Kastoudi, Denise

    2011-01-01

    The gaming and interactional nature of the virtual environment of Second Life offers opportunities for language learning beyond traditional pedagogy. This case study examined the potential of 3D virtual quest games to enhance vocabulary acquisition through interaction, negotiation of meaning and noticing. Four adult students of English at…

  1. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and deep investigation are focused on depth extraction from captured integral 3D images. The method of calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
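
    As a minimal sketch of the disparity-based depth extraction discussed above (under simplifying assumptions: two rectified grayscale elemental images and a fixed matching window; this is not the authors' exact algorithm), a window-based SSD search can be written as follows. A colour SSD accumulates the same cost over each colour channel, and the multiple-baseline idea sums SSD costs from several image pairs before taking the minimum.

        import numpy as np

        def ssd_disparity(left, right, max_disp=32, win=4):
            """left, right: 2D float arrays (rectified). Returns per-pixel disparity."""
            h, w = left.shape
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(win, h - win):
                for x in range(win + max_disp, w - win):
                    patch = left[y - win:y + win + 1, x - win:x + win + 1]
                    costs = [np.sum((patch - right[y - win:y + win + 1,
                                                   x - d - win:x - d + win + 1]) ** 2)
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))   # disparity with minimal SSD cost
            # Depth is proportional to baseline * focal_length / disparity.
            return disp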

  2. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important due to both economic reasons and patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to the privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  3. The Effect of 3D Virtual Learning Environment on Secondary School Third Grade Students' Attitudes toward Mathematics

    ERIC Educational Resources Information Center

    Simsek, Irfan

    2016-01-01

    This research, conducted in Second Life, a three-dimensional online virtual world, aims to reveal the effects on student attitudes toward mathematics courses of design activities that enable the third-grade students of secondary school (primary education seventh grade) to see 3D objects in mathematics courses in a…

  4. Fusion of image and laser-scanning data in a large-scale 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Shih, Jhih-Syuan; Lin, Ta-Te

    2013-05-01

    Construction of large-scale 3D virtual environments is important in many fields, such as robotic navigation, urban planning, transportation, and remote sensing. The laser-scanning approach is the most common method used in constructing 3D models. This paper proposes an automatic method to fuse image and laser-scanning data in a large-scale 3D virtual environment. The system comprises a laser-scanning device installed on a robot platform and the software for data fusion and visualization. The algorithms of data fusion and scene integration are presented. Experiments were performed on the reconstruction of outdoor scenes to test and demonstrate the functionality of the system. We also discuss the efficacy of the system and technical problems involved in this proposed method.

  5. The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers

    ERIC Educational Resources Information Center

    Can, Tuncer; Simsek, Irfan

    2015-01-01

    Recent developments in computer and Internet technologies and in three-dimensional modelling necessitate new approaches and methods in the education field and bring new opportunities to higher education. The Internet and virtual learning environments have changed learning opportunities by diversifying the learning options not…

  6. Research on the key technologies of 3D spatial data organization and management for virtual building environments

    NASA Astrophysics Data System (ADS)

    Gong, Jun; Zhu, Qing

    2006-10-01

    As a special case of VGE in the field of AEC (architecture, engineering and construction), the Virtual Building Environment (VBE) has attracted broad concern. Highly complex, large-scale 3D spatial data is the main bottleneck of VBE applications, so 3D spatial data organization and management becomes the core technology for VBE. This paper puts forward a 3D spatial data model for VBE that can be implemented with high performance. The inherent storage method of CAD data makes the data redundant and does not address efficient visualization, which is a practical bottleneck in integrating CAD models, so an efficient method to integrate CAD model data is put forward. Moreover, since 3D spatial indices based on the R-tree are usually limited by their low efficiency due to severe overlap of sibling nodes and uneven node sizes, a new node-choosing algorithm for the R-tree is proposed.
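
    To clarify what a node-choosing algorithm decides, here is a minimal sketch of the classical least-volume-enlargement rule for picking the subtree of a 3D R-tree into which a new bounding box is inserted; this is the textbook baseline that improved node-choosing algorithms refine, not the paper's own algorithm.

        # A box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
        def volume(box):
            (x0, y0, z0), (x1, y1, z1) = box
            return max(x1 - x0, 0) * max(y1 - y0, 0) * max(z1 - z0, 0)

        def union(a, b):
            (ax0, ay0, az0), (ax1, ay1, az1) = a
            (bx0, by0, bz0), (bx1, by1, bz1) = b
            return ((min(ax0, bx0), min(ay0, by0), min(az0, bz0)),
                    (max(ax1, bx1), max(ay1, by1), max(az1, bz1)))

        def choose_child(children, new_box):
            """children: list of child-node bounding boxes; pick where to insert new_box."""
            def enlargement(child):
                return volume(union(child, new_box)) - volume(child)
            # Least enlargement wins; ties broken by smaller current volume.
            return min(children, key=lambda c: (enlargement(c), volume(c)))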

  7. Quality of Grasping and the Role of Haptics in a 3-D Immersive Virtual Reality Environment in Individuals With Stroke.

    PubMed

    Levin, Mindy F; Magdalon, Eliane C; Michaelsen, Stella M; Quevedo, Antonio A F

    2015-11-01

    Reaching and grasping parameters with and without haptic feedback were characterized in people with chronic stroke. Twelve (67 ± 10 years) individuals with chronic stroke and arm/hand paresis (Fugl-Meyer Assessment-Arm: ≥ 46/66 pts) participated. Three-dimensional (3-D) temporal and spatial kinematics of reaching and grasping movements to three objects (can: cylindrical grasp; screwdriver: power grasp; pen: precision grasp) in a physical environment (PE), with and without additional haptic feedback, and in a 3-D virtual environment (VE) with haptic feedback were recorded. Participants reached, grasped and transported physical and virtual objects using similar movement strategies in all conditions. Reaches made in the VE were less smooth and slower compared to the PE. Arm and trunk kinematics were similar in both environments and glove conditions. For grasping, stroke subjects preserved aperture scaling to object size but used wider hand apertures with longer delays between the times to maximal reaching velocity and maximal grasping aperture. Wearing the glove decreased reaching velocity. Our results in a small group of subjects suggest that providing haptic information in the VE did not affect the validity of reaching and grasping movements. Small disparities in movement parameters between environments may be due to differences in perception of object distance in the VE. Reach-to-grasp kinematics to smaller objects may be improved by better 3-D rendering. Comparable kinematics between environments and conditions is encouraging for the incorporation of high-quality VEs in rehabilitation programs aimed at improving upper limb recovery. PMID:25594971

  8. Virtually supportive: A feasibility pilot study of an online support group for dementia caregivers in a 3D virtual environment

    PubMed Central

    O’Connor, Mary-Frances; Arizmendi, Brian J.; Kaszniak, Alfred W.

    2014-01-01

    Caregiver support groups effectively reduce the stress of caring for someone with dementia. These same demands can prevent participation in a group. The present feasibility study investigated a virtual online caregiver support group to bring the support group into the home. While online groups have been shown to be helpful, submissions to a message board (vs. live conversation) can feel impersonal. By using avatars, participants interacted via real-time chat in a virtual environment in an 8-week support group. Data indicated lower levels of perceived stress, depression and loneliness across participants. Importantly, satisfaction reports also indicate that caregivers overcame the barriers to participation and had a strong sense of the group’s presence. This study provides the framework for an accessible and low-cost online support group for dementia caregivers. The study demonstrates the feasibility of an interactive group in a virtual environment for engaging members in meaningful interaction. PMID:24984911

  9. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted uses of 3D, e.g., in plots, bar or pie charts, are heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries and educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  10. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that hopefully matches the user's own. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are found to be significantly better, with sometimes more than 100 percent of accuracy gained. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera…
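
    The gaze-contingent level-of-detail idea mentioned above can be illustrated with a small sketch (the thresholds, names and gaze source are placeholders; the surfel-based attention model itself is not reproduced): objects whose screen position is close to the estimated gaze point get the finest level of detail, and detail decreases toward the periphery.

        from math import hypot

        def lod_for_object(obj_xy, gaze_xy, screen_diag):
            """Return 0 (finest) .. 2 (coarsest) from normalized screen distance to gaze."""
            d = hypot(obj_xy[0] - gaze_xy[0], obj_xy[1] - gaze_xy[1]) / screen_diag
            if d < 0.10:          # roughly the foveal region
                return 0
            if d < 0.30:          # parafoveal
                return 1
            return 2              # periphery: coarsest textures / geometry

        gaze = (960, 540)         # gaze point predicted by the attention model (placeholder)
        objects = {"statue": (1000, 560), "door": (300, 200)}
        print({name: lod_for_object(pos, gaze, hypot(1920, 1080)) for name, pos in objects.items()})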

  11. Learning in 3-D Virtual Worlds: Rethinking Media Literacy

    ERIC Educational Resources Information Center

    Qian, Yufeng

    2008-01-01

    3-D virtual worlds, as a new form of learning environments in the 21st century, hold great potential in education. Learning in such environments, however, demands a broader spectrum of literacy skills. This article identifies a new set of media literacy skills required in 3-D virtual learning environments by reviewing exemplary 3-D virtual…

  12. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R]

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  13. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures as captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, which uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient. PMID:23827333

  14. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  15. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and use of manipulatives offered by a learning tool on students' motivation and learning outcomes. Use of this VLE is also a valuable source for exploration of how the learners' spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  16. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  17. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    VRT inherently has the potential to construct an effective learning environment due to its 3I characteristics: Interaction, Immersion and Imagination. Along with the development of VRT, it is now applied in education in a more profound way, and the Virtual Museum is one of these applications. The Virtual Museum is based on Web3D technology, and extensibility is the most important factor. Considering the advantages and disadvantages of each Web3D technology, the VRML, CULT3D and VIEWPOINT technologies were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the Virtual Museum an interactive learning environment.

  18. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD $5,000. This scanner uses visible light sensing to capture both structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  19. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  20. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  1. Reorienting in Virtual 3D Environments: Do Adult Humans Use Principal Axes, Medial Axes or Local Geometry?

    PubMed Central

    Ambosta, Althea H.; Reichert, James F.; Kelly, Debbie M.

    2013-01-01

    Studies have shown that animals, including humans, use the geometric properties of environments to orient. It has been proposed that orientation is accomplished primarily by encoding the principal axes (i.e., global geometry) of an environment. However, recent research has shown that animals use local information such as wall length and corner angles as well as local shape parameters (i.e., medial axes) to orient. The goal of the current study was to determine whether adult humans reorient according to global geometry based on principal axes or whether reliance is on local geometry such as wall length and sense information or medial axes. Using a virtual environment task, participants were trained to select a response box located at one of two geometrically identical corners within a featureless rectangular-shaped environment. Participants were subsequently tested in a transformed L-shaped environment that allowed for a dissociation of strategies based on principal axes, medial axes and local geometry. Results showed that participants relied primarily on a medial axes strategy to reorient in the L-shaped test environment. Importantly, the search behaviour of participants could not be explained by a principal axes-based strategy. PMID:24223869

  2. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever-increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs), provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  3. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases, which has proved efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with 3D texture-mapping technology based on the graphics processing unit (GPU) and could be displayed dynamically in real time. When displaying it in real time, we could not only observe the inside of the heart chambers but also examine them from new viewing angles, using 3D data that had already been clipped according to the physician's wishes. For observation, we used both an interactive mode and an auto mode. In the auto mode, we used Dijkstra's algorithm, which treated the 3D Euclidean distance as the weighting factor, to find the view path quickly, and then used the view path to calculate the four-chamber plane. PMID:23198444
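
    The automatic view-path computation mentioned above can be sketched as Dijkstra's algorithm over a voxel neighbourhood graph with the 3D Euclidean distance between neighbouring voxels as the edge weight. The 26-connected grid and the toy, fully traversable volume below are assumptions for illustration and do not reproduce the paper's data structures.

```python
import heapq
import numpy as np

def dijkstra_view_path(free, start, goal):
    """Shortest path through the free voxels of a binary volume, using the 3D
    Euclidean distance between neighbouring voxels as the edge weight."""
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for off in offsets:
            nxt = tuple(n + o for n, o in zip(node, off))
            if not all(0 <= c < s for c, s in zip(nxt, free.shape)) or not free[nxt]:
                continue
            nd = d + float(np.linalg.norm(off))
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Reconstruct the view path by walking the predecessor map backwards.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

if __name__ == "__main__":
    volume = np.ones((20, 20, 20), dtype=bool)  # toy "lumen": every voxel traversable
    print(dijkstra_view_path(volume, (1, 1, 1), (18, 18, 18))[:5])
```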

  4. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), thus one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  5. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…

  6. Cross-Cultural Discussions in a 3D Virtual Environment and Their Affordances for Learners' Motivation and Foreign Language Discussion Skills

    ERIC Educational Resources Information Center

    Jauregi, Kristi; Kuure, Leena; Bastian, Pim; Reinhardt, Dennis; Koivisto, Tuomo

    2015-01-01

    Within the European TILA project a case study was carried out where pupils from schools in Finland and the Netherlands engaged in debating sessions using the 3D virtual world of OpenSim once a week for a period of 5 weeks. The case study had two main objectives: (1) to study the impact that the discussion tasks undertaken in a virtual environment…

  7. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  8. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections, the software generates, and allows for visualization of, images of virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. PMID:25808044
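
    The "virtual microtome" idea, resampling a registered stack of serial sections along an arbitrarily oriented plane, can be sketched as follows. The trilinear interpolation via SciPy and the plane parameterisation are assumptions for illustration; the published model was built with dedicated segmentation and visualization software.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_section(volume, origin, u_axis, v_axis, size=(256, 256)):
    """Sample an oblique plane from a 3D image stack (z, y, x).
    origin: a point on the plane; u_axis, v_axis: orthogonal in-plane directions."""
    u_axis = np.asarray(u_axis, float)
    v_axis = np.asarray(v_axis, float)
    u_axis /= np.linalg.norm(u_axis)
    v_axis /= np.linalg.norm(v_axis)
    us = np.arange(size[0]) - size[0] / 2.0
    vs = np.arange(size[1]) - size[1] / 2.0
    uu, vv = np.meshgrid(us, vs, indexing="ij")
    # Coordinates of every pixel of the virtual section inside the volume.
    coords = (np.asarray(origin, float)[:, None, None]
              + u_axis[:, None, None] * uu + v_axis[:, None, None] * vv)
    return map_coordinates(volume, coords, order=1, mode="nearest")

if __name__ == "__main__":
    stack = np.random.rand(60, 512, 512)                     # stand-in serial sections
    oblique = virtual_section(stack, origin=(30, 256, 256),  # centre of the stack
                              u_axis=(0.5, 0.87, 0.0),       # tilted in the z-y plane
                              v_axis=(0.0, 0.0, 1.0))
    print(oblique.shape)
```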

  9. Towards a virtual C. elegans: a framework for simulation and visualization of the neuromuscular system in a 3D physical environment.

    PubMed

    Palyanov, Andrey; Khayrulin, Sergey; Larson, Stephen D; Dibert, Alexander

    The nematode C. elegans is the only animal with a known neuronal wiring diagram, or "connectome". During the last three decades, extensive studies of the C. elegans have provided wide-ranging data about it, but few systematic ways of integrating these data into a dynamic model have been put forward. Here we present a detailed demonstration of a virtual C. elegans aimed at integrating these data in the form of a 3D dynamic model operating in a simulated physical environment. Our current demonstration includes a realistic flexible worm body model, muscular system and a partially implemented ventral neural cord. Our virtual C. elegans demonstrates successful forward and backward locomotion when sending sinusoidal patterns of neuronal activity to groups of motor neurons. To account for the relatively slow propagation velocity and the attenuation of neuronal signals, we introduced "pseudo neurons" into our model to simulate simplified neuronal dynamics. The pseudo neurons also provide a good way of visualizing the nervous system's structure and activity dynamics. PMID:22935967

  10. Analytical augmentation of 3D simulation environments

    NASA Astrophysics Data System (ADS)

    Loughran, Julia J.; Stahl, Marchelle M.

    1998-05-01

    This paper describes an approach for augmenting three- dimensional (3D) virtual environments (VEs) with analytic information and multimedia annotations to enhance training and education applications. Analytic or symbolic information in VEs is presented as bar charts, text, graphical overlays, or with the use of color. Analytic results can be computed and displayed in the VE at run-time or, more likely, while replaying a simulation. These annotations would typically include computations of pre-defined Measures of Performance (MOPs) or Measures of Effectiveness (MOEs) associated with the training or educational goals of the simulation. Multimedia annotations are inserted into the VE by the user and may include: a drawing or whiteboarding capability, enabling participants to insert written text and/or graphics into the two-dimensional (2D) or 3D world; audio comments, and/or video recordings. These annotations can clarify a point, capture teacher feedback, or elaborate on the student's perspective or understanding of the experience. The annotations are captured in the VE either synchronously or asynchronously from the users (students and instructors), during simulation execution or afterward during a replay. When replaying or reviewing the simulation, the embedded annotations can be reviewed by a single user or by multiple users through the use of collaboration technologies. By augmenting 3D virtual environments with analytic and multimedia annotations, the education and training experience may be enhanced. The annotations can offer more effective feedback, enhance understanding, and increase participation. They may also support distance learning by promoting student/teacher interaction without co-location.

  11. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    This paper explores techniques for developing intuitive and efficient user interfaces for virtual reality systems. The work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. In order to establish this, a new user interface was created that applied various well-understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.

  12. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  13. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. We had subjects move passively, in one of two different exploration modes, through a visual virtual environment that consisted of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the 3 canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded or not to the tunnel through which they had just passed. Results showed that performance was better on this task for the upright than for the unconstrained displacement mode, i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effects of gravity that lead to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved…

  14. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and in which they interact via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and way finding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  15. Andragogical Characteristics and Expectations of University of Hawai'i Adult Learners in a 3D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Meeder, Rebecca L.

    2012-01-01

    The purpose of this study was to discover which andragogical characteristics and expectations of adult learners manifested themselves in the three-dimensional, multi-user virtual environment known as Second Life. This digital ethnographic study focused specifically on adult students within the University of Hawai'i Second Life group and their…

  16. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optic sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor-laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution for the interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  17. The virtual reality 3D city of Ningbo

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Wu, Dun

    2009-09-01

    In 2005, Ningbo Design Research Institute of Mapping & Surveying started the development of concepts and an implementation of Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital city technological framework and well supported by computing advances, space technologies, and commercial innovations. It has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is not only a 3D-GIS launch project but also a technology innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo. Geo-information systems are developing towards a more reality-, three dimension- and Service-Oriented Architecture-based system. The VRNS uses technology such as 3D modeling, user interface design, view scene modeling, real-time rendering and interactive roaming under a virtual environment. Two applications of VRNS already being used are for city planning and high-rise buildings' security management. The final purpose is to develop VRNS into a powerful public information platform, and to achieve that heterogeneous city information resources share this one single platform.

  18. The virtual reality 3D city of Ningbo

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Wu, Dun

    2010-11-01

    In 2005, Ningbo Design Research Institute of Mapping & Surveying started the development of concepts and an implementation of Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital city technological framework and well supported by computing advances, space technologies, and commercial innovations. It has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is not only a 3D-GIS launch project but also a technology innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo. Geo-information systems are developing towards a more reality-, three dimension- and Service-Oriented Architecture-based system. The VRNS uses technology such as 3D modeling, user interface design, view scene modeling, real-time rendering and interactive roaming under a virtual environment. Two applications of VRNS already being used are for city planning and high-rise buildings' security management. The final purpose is to develop VRNS into a powerful public information platform, and to achieve that heterogeneous city information resources share this one single platform.

  19. ESL Teacher Training in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Kozlova, Iryna; Priven, Dmitri

    2015-01-01

    Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…

  20. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for generating virtual 3-D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images with close-range photogrammetry, DSMs and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one is based on the degree of automation (automatic, semi-automatic and manual methods), and the other is based on the data input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of this research, together with a short justification and analysis and the present trend in 3D city modeling. This paper thus gives an overview of the techniques related to the generation of virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3-D city models. Photo-realistic, scalable, geo-referenced virtual 3…

  1. Dynamic 3D echocardiography in virtual reality

    PubMed Central

    van den Bosch, Annemien E; Koning, Anton HJ; Meijboom, Folkert J; McGhie, Jackie S; Simoons, Maarten L; van der Spek, Peter J; Bogers, Ad JJC

    2005-01-01

    Background This pilot study was performed to evaluate whether virtual reality is applicable for three-dimensional echocardiography and if three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. Methods Three-dimensional echocardiographic data sets from 2 normal subjects and from 4 patients with a mitral valve pathological condition were included in the study. The three-dimensional data sets were acquired with the Philips Sonos 7500 echo-system and transferred to the BARCO (Barco N.V., Kortrijk, Belgium) I-Space. Ten independent observers assessed the 6 three-dimensional data sets with and without mitral valve pathology. After 10 minutes' instruction in the I-Space, all of the observers could use the virtual pointer that is necessary to create cut planes in the hologram. Results The 10 independent observers correctly assessed the normal and pathological mitral valves in the holograms (analysis time approximately 10 minutes). Conclusion This report shows that dynamic holographic imaging of three-dimensional echocardiographic data is feasible. However, the applicability and usefulness of this technology in clinical practice is still limited. PMID:16375768

  2. Improvements in education in pathology: virtual 3D specimens.

    PubMed

    Kalinski, Thomas; Zwönitzer, Ralf; Jonczyk-Weber, Thomas; Hofmann, Harald; Bernarding, Johannes; Roessner, Albert

    2009-01-01

    Virtual three-dimensional (3D) specimens correspond to 3D visualizations of real pathological specimens on a computer display. We describe a simple method for the digitalization of such specimens from high-quality digital images. The images were taken during a whole rotation of a specimen, and merged together into a JPEG2000 multi-document file. The files were made available in the internet (http://patho.med.uni-magdeburg.de/research.shtml) and obtained very positive ratings by medical students. Virtual 3D specimens expand the application of digital techniques in pathology, and will contribute significantly to the successful introduction of knowledge databases and electronic learning platforms. PMID:19457621

  3. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  4. Digital Geology from field to 3D modelling and Google Earth virtual environment: methods and goals from the Furlo Gorge (Northern Apennines - Italy)

    NASA Astrophysics Data System (ADS)

    De Donatis, Mauro; Susini, Sara

    2014-05-01

    A new map of the Furlo Gorge was surveyed and elaborated digitally. In every step of the work we used digital tools such as mobile GIS and 3D modelling software. Phase 1: Starting in the lab, we planned the field project and designed the base cartography, forms and database in the way we thought best for collecting and storing data in order to produce a digital n-dimensional map. Bedding attitudes, outcrop sketches and descriptions, stratigraphic logs, structural features and other information were collected and organised in a structured database using a rugged tablet PC, a GPS receiver, digital cameras and, later, an Android smartphone with some survey apps developed in-house. A new mobile GIS (BeeGIS) was developed starting from an open-source GIS (uDig): a number of tools such as GPS connection, pen-drawing annotations, geonotes, a fieldbook, photo synchronization and geotagging were originally designed. Phase 2: After some months of digital field work, all the information was elaborated to draw a geologic map in a GIS environment. For that we used both commercial (ArcGIS) and open-source (gvSig, QGIS, uDig) software without big technical problems. Phase 3: When we got to the step of building a 3D model (using 3DMove), passing through the assisted drawing of cross-sections (2DMove), we discovered a number of problems in the interpretation of geological structures (thrusts, normal faults) and, even more, in the interpretation of stratigraphic thicknesses and boundaries and their relationships with topography. Phase 4: Before an "on-armchair" redrawing of the map, we decided to go back to the field and check directly what was wrong. Two main advantages came from this: (1) the mistakes we found could be reinterpreted and corrected directly in the field, having all the digital tools we needed; (2) previous interpretations could be stored in GIS layers, keeping a record of the previous work (including the mistakes). Phase 5: A 3D model built with 3DMove is already almost self…

  5. Identifying Virtual 3D Geometric Shapes with a Vibrotactile Glove.

    PubMed

    Martínez, Jonatan; García, Arturo; Oliver, Miguel; Molina, José Pascual; González, Pascual

    2016-01-01

    The emergence of off-screen interaction devices is bringing the field of virtual reality to a broad range of applications where virtual objects can be manipulated without the use of traditional peripherals. However, to facilitate object interaction, other stimuli such as haptic feedback are necessary to improve the user experience. To enable the identification of virtual 3D objects without visual feedback, a haptic display based on a vibrotactile glove and multiple points of contact gives users an enhanced sensation of touching a virtual object with their hands. Experimental results demonstrate the capacity of this technology in practical applications. PMID:25137722

  6. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  7. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present 3D virtual phantom design software, which was developed based on object-oriented programming methodology and dedicated to medical physics research. This software was named Magical Phantom (MPhantom) and is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration, and it has passed a real-scene application test. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research. PMID:24804488
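
    The abstract describes constructing arbitrary 3D phantoms and exporting them as CT images. The toy sketch below builds a voxel phantom (a water cube with a denser spherical insert) and takes one axial slice; the Hounsfield-style values and the geometry are made-up illustrations and say nothing about MPhantom's internals or its DICOM 3.0 export.

```python
import numpy as np

def build_phantom(shape=(64, 128, 128), voxel_mm=1.0):
    """Voxel phantom: air background, a water cube, and a denser spherical insert.
    Values are rough Hounsfield-unit stand-ins (air -1000, water 0, insert +300)."""
    phantom = np.full(shape, -1000.0)
    z, y, x = np.indices(shape).astype(float) * voxel_mm
    cz, cy, cx = [s * voxel_mm / 2.0 for s in shape]
    cube = (np.abs(y - cy) < 40) & (np.abs(x - cx) < 40) & (np.abs(z - cz) < 25)
    phantom[cube] = 0.0
    sphere = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 < 15.0 ** 2
    phantom[sphere] = 300.0
    return phantom

if __name__ == "__main__":
    vol = build_phantom()
    mid_slice = vol[vol.shape[0] // 2]   # one axial "CT image" of the phantom
    print(mid_slice.shape, mid_slice.min(), mid_slice.max())
```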

  8. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch the virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  9. Virtual view adaptation for 3D multiview video streaming

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Virtual views in 3D-TV and multi-view video systems are reconstructed images of the scene generated synthetically from the original views. In this paper, we analyze the performance of streaming virtual views over IP networks with a limited and time-varying available bandwidth. We show that the average video quality perceived by the user can be improved with an adaptive streaming strategy aiming at maximizing the average video quality. Our adaptive 3D multi-view streaming can provide a quality improvement of 2 dB on average over non-adaptive streaming. We demonstrate that an optimized virtual view adaptation algorithm needs to be view-dependent and achieves an improvement of up to 0.7 dB.
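
    The abstract does not spell out the adaptation algorithm, so the following is only one plausible reading of a view-dependent strategy: greedily spending the available bandwidth on whichever view offers the best weighted quality gain per extra bit. The rate-quality ladders and importance weights are invented illustration values.

```python
def adapt_views(rate_quality, importance, bandwidth):
    """Greedy rate allocation across views: start every view at its lowest rate and
    repeatedly upgrade the view offering the best weighted quality gain per extra
    bit, until the available bandwidth is exhausted.
    rate_quality: per view, a list of (bitrate_kbps, quality_db) pairs, ascending."""
    choice = [0] * len(rate_quality)
    used = sum(levels[0][0] for levels in rate_quality)
    while True:
        best, best_gain = None, 0.0
        for v, levels in enumerate(rate_quality):
            if choice[v] + 1 >= len(levels):
                continue
            cur, nxt = levels[choice[v]], levels[choice[v] + 1]
            extra = nxt[0] - cur[0]
            if used + extra > bandwidth:
                continue
            gain = importance[v] * (nxt[1] - cur[1]) / extra
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        used += (rate_quality[best][choice[best] + 1][0]
                 - rate_quality[best][choice[best]][0])
        choice[best] += 1
    return choice

if __name__ == "__main__":
    ladders = [[(300, 30.0), (600, 33.0), (1200, 35.0)],   # directly displayed view
               [(300, 28.0), (600, 31.5), (1200, 34.0)]]   # view used for synthesis
    print(adapt_views(ladders, importance=[1.0, 1.3], bandwidth=1600))
```

    Giving a higher importance weight to a view that is reused for synthesizing virtual views is one way such a view-dependent policy could outperform a uniform one.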

  10. The Development of a Virtual 3D Model of the Renal Corpuscle from Serial Histological Sections for E-Learning Environments

    ERIC Educational Resources Information Center

    Roth, Jeremy A.; Wilson, Timothy D.; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated…

  11. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  12. Analyzing Visitors' Discourse, Attitudes, Perceptions, and Knowledge Acquisition in an Art Museum Tour after Using a 3D Virtual Environment

    ERIC Educational Resources Information Center

    D'Alba, Adriana

    2012-01-01

    The main purpose of this mixed methods research was to explore and analyze visitors' overall experience while they attended a museum exhibition, and examine how this experience was affected by previously using a virtual 3-dimensional representation of the museum itself. The research measured knowledge acquisition in a virtual museum, and…

  13. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  14. Computer Assisted Virtual Environment - CAVE

    SciTech Connect

    Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

    2014-01-14

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  15. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

    2014-06-09

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  16. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  17. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  18. Teaching Digital Natives: 3-D Virtual Science Lab in the Middle School Science Classroom

    ERIC Educational Resources Information Center

    Franklin, Teresa J.

    2008-01-01

    This paper presents the development of a 3-D virtual environment in Second Life for the delivery of standards-based science content for middle school students in the rural Appalachian region of Southeast Ohio. A mixed method approach in which quantitative results of improved student learning and qualitative observations of implementation within…

  19. Design and Implementation of a 3D Multi-User Virtual World for Language Learning

    ERIC Educational Resources Information Center

    Ibanez, Maria Blanca; Garcia, Jose Jesus; Galan, Sergio; Maroto, David; Morillo, Diego; Kloos, Carlos Delgado

    2011-01-01

    The best way to learn is by having a good teacher and the best language learning takes place when the learner is immersed in an environment where the language is natively spoken. 3D multi-user virtual worlds have been claimed to be useful for learning, and the field of exploiting them for education is becoming more and more active thanks to the…

  20. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the ruined Engelbourg castle in Thann, Alsace, France, has for some years been the object of the full attention of the city, which owns it, and also of partners such as historians and archaeologists who are in charge of its study. The valuation of the site is one of the main objectives, as well as its conservation and understanding. The aim of this project is to use the environment of the virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through diverse scripts that convert the viewer into a real 3D interface. Beginning with a first virtual tour that contains about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity that makes the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were produced. For the documentation, other sets of images were acquired at various seasons and in various climate conditions, which allows the site to be documented in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, which is virtual too, was also included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (reports made during the acquisition phases, the restoration works, the excavations, etc.) and digital georeferenced documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image…

  1. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  2. 3D Reconstruction of virtual colon structures from colonoscopy images.

    PubMed

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  3. A Web 2.0/Web3D Hybrid Platform for Engaging Students in e-Learning Environments

    ERIC Educational Resources Information Center

    de Byl, Penny; Taylor, Janet

    2007-01-01

    This paper explores the Web 2.0 ethos with respect to the application of pedagogy within 3D online virtual environments. 3D worlds can create a synthetic experience capturing the essence of "being" in a particular world or context. The AliveX3D platform adopts the Web 2.0 ethos and applies it to online 3D virtual environment forming a Web…

  4. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micro mirrors delivering 720p resolution displays in a small form-factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina resulting in a virtual retinal display.

  5. Approach to Constructing 3d Virtual Scene of Irrigation Area Using Multi-Source Data

    NASA Astrophysics Data System (ADS)

    Cheng, S.; Dou, M.; Wang, J.; Zhang, S.; Chen, X.

    2015-10-01

    For an irrigation area that is often complicated by various 3D artificial ground features and the natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigation operations vividly and intuitively. Based on previous researchers' achievements, a simple, practical and cost-effective approach was proposed in this study, adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps and so on, the 3D terrain model and ground feature models were created interactively. Both models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, with a good visual effect and primary GIS functions for data query and analysis, was constructed. Yet there is still a long way to go in establishing a true 3D GIS for the irrigation area: the issues encountered in this study are deeply discussed and future research directions are pointed out at the end of the paper.

  6. Second Life, a 3-D Animated Virtual World: An Alternative Platform for (Art) Education

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2011-01-01

    3-D animated virtual worlds are no longer only for gaming. With the advance of technology, animated virtual worlds are not only found on every computer but also connect users through the internet. Today, virtual worlds are created not only by companies, but also through the collaboration of users. Online 3-D animated virtual worlds provide a new…

  7. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
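
    For orientation, the sketch below shows the general shape of a quaternion-based complementary filter of the kind mentioned in the abstract: gyroscope rates are integrated and the resulting attitude is nudged toward the gravity direction measured by the accelerometer. It is a minimal generic sketch in Python, not the authors' implementation; the blending gain and variable names are assumptions, and the dynamic-time-warping gesture recognizer is not shown.

      import numpy as np

      def quat_mult(q, r):
          w1, x1, y1, z1 = q
          w2, x2, y2, z2 = r
          return np.array([
              w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2,
          ])

      def rotate(q, v):
          """Rotate vector v by unit quaternion q (q * v * conj(q))."""
          qv = np.concatenate([[0.0], v])
          q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
          return quat_mult(quat_mult(q, qv), q_conj)[1:]

      def complementary_update(q, gyro, accel, dt, alpha=0.02):
          """One filter step: integrate the gyro, then nudge the attitude so the
          predicted gravity direction agrees with the accelerometer reading."""
          # 1) Gyro integration (first-order quaternion update).
          dq = 0.5 * quat_mult(q, np.concatenate([[0.0], gyro]))
          q = q + dq * dt
          q /= np.linalg.norm(q)

          # 2) Accelerometer correction (compensates roll/pitch drift).
          g_meas = accel / np.linalg.norm(accel)                       # measured "up" in body frame
          g_pred = rotate(q * np.array([1.0, -1.0, -1.0, -1.0]),
                          np.array([0.0, 0.0, 1.0]))                   # world up expressed in body frame
          correction = np.cross(g_meas, g_pred) * alpha                # small body-frame rotation toward agreement
          q = q + 0.5 * quat_mult(q, np.concatenate([[0.0], correction]))
          return q / np.linalg.norm(q)

      q = np.array([1.0, 0.0, 0.0, 0.0])                               # initial attitude
      q = complementary_update(q, gyro=np.array([0.0, 0.0, 0.1]),
                               accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
      print(q)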

  8. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  9. Acoustic simulation in realistic 3D virtual scenes

    NASA Astrophysics Data System (ADS)

    Gozard, Patrick; Le Goff, Alain; Naz, Pierre; Cathala, Thierry; Latger, Jean

    2003-09-01

    The simulation workshop CHORALE, developed in collaboration with the OKTAL SE company for the French MoD, is used by government services and industrial companies for weapon system validation and qualification trials in the infrared domain. The main operational reference for CHORALE is the assessment of the infrared guidance system of the French version of the Storm Shadow missile, called Scalp. The use of the CHORALE workshop is now extended to the acoustic domain. The main objective is the simulation of the detection of moving vehicles in realistic 3D virtual scenes. This article briefly describes the acoustic model in CHORALE. The 3D scene is described by a set of polygons. Each polygon is characterized by its acoustic resistivity or its complex impedance. Sound sources are associated with moving vehicles and are characterized by their spectra and directivities. A microphone sensor is defined by its position, its frequency band and its sensitivity. The purpose of the acoustic simulation is to calculate the incoming acoustic pressure on microphone sensors. CHORALE is based on a generic ray tracing kernel. This kernel possesses original capabilities: computation time is nearly independent of the scene complexity, especially the number of polygons; databases are enhanced with precise physical data; and special antialiasing mechanisms have been developed that make it possible to manage very accurate details. The ray tracer takes into account the wave's geometrical divergence and the atmospheric transmission. Sound wave refraction is simulated, and rays cast in the 3D scene are curved according to the air temperature gradient. Finally, sound diffraction by edges (hills, walls, ...) is also taken into account.
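
    As a rough illustration of the per-ray bookkeeping described above, the toy calculation below applies spherical (geometrical) divergence and a frequency-dependent atmospheric absorption term to a source spectrum to estimate the level arriving at a microphone. The spectrum and absorption coefficients are placeholder values, not CHORALE's physical database, and refraction and diffraction are ignored.

      import numpy as np

      def received_spectrum(source_levels_db, distance_m, absorption_db_per_m):
          """Attenuate a source spectrum by spherical spreading (20*log10(r), re 1 m)
          and by frequency-dependent atmospheric absorption (alpha * r)."""
          spreading_loss = 20.0 * np.log10(max(distance_m, 1.0))    # dB
          absorption_loss = absorption_db_per_m * distance_m         # dB per band
          return source_levels_db - spreading_loss - absorption_loss

      # Hypothetical vehicle source spectrum in octave bands (dB re 20 uPa at 1 m).
      freqs = np.array([63.0, 125.0, 250.0, 500.0, 1000.0])          # Hz
      source = np.array([95.0, 92.0, 88.0, 84.0, 80.0])
      alpha = np.array([0.0001, 0.0004, 0.001, 0.002, 0.005])        # dB/m, placeholder values
      print(received_spectrum(source, distance_m=300.0, absorption_db_per_m=alpha))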

  10. 3D virtual colonoscopy with real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.

    2000-04-01

    In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low-cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board lacks some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low-cost PCs.
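
    The abstract does not spell out the axis-aligned boxing technique, so the Python sketch below only illustrates the general idea behind that family of methods: a volume larger than the renderer's limit is split into fixed-size bricks and only the bricks near the current viewpoint are submitted for rendering. The brick size, camera position and overlap test are invented for the illustration and should not be read as the paper's algorithm.

      import numpy as np

      def bricks_near_camera(volume_shape, brick=256, camera_voxel=(300, 260, 410),
                             radius_voxels=200):
          """Return (z, y, x) index triples of fixed-size bricks whose axis-aligned
          bounds intersect a cubic region of interest around the camera position."""
          cam = np.asarray(camera_voxel)
          lo, hi = cam - radius_voxels, cam + radius_voxels
          selected = []
          for bz in range(0, volume_shape[0], brick):
              for by in range(0, volume_shape[1], brick):
                  for bx in range(0, volume_shape[2], brick):
                      b_lo = np.array([bz, by, bx])
                      b_hi = b_lo + brick
                      if np.all(b_hi > lo) and np.all(b_lo < hi):   # AABB overlap test
                          selected.append((bz, by, bx))
          return selected

      print(bricks_near_camera(volume_shape=(512, 512, 512)))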

  11. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  12. Instructors' Perceptions of Three-Dimensional (3D) Virtual Worlds: Instructional Use, Implementation and Benefits for Adult Learners

    ERIC Educational Resources Information Center

    Stone, Sophia Jeffries

    2009-01-01

    The purpose of this dissertation research study was to explore instructors' perceptions of the educational application of three-dimensional (3D) virtual worlds in a variety of academic discipline areas and to assess the strengths and limitations this virtual environment presents for teaching adult learners. The guiding research question for this…

  13. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  14. Participatory Gis: Experimentations for a 3d Social Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2013-08-01

    The dawn of GeoWeb 2.0, the geographic extension of Web 2.0, has opened new possibilities in terms of online dissemination and sharing of geospatial contents, thus laying the foundations for a fruitful development of Participatory GIS (PGIS). The purpose of the study is to investigate the extension of PGIS applications, which are quite mature in the traditional bi-dimensional framework, up to the third dimension. In more detail, the system should couple powerful 3D visualization with increased public participation by means of a tool allowing data collection from mobile devices (e.g. smartphones and tablets). The PGIS application, built using the open source NASA World Wind virtual globe, is focussed on the cultural and tourism heritage of Como city, located in Northern Italy. An authentication mechanism was implemented, which allows users to create and manage customized projects through cartographic mash-ups of Web Map Service (WMS) layers. Saved projects populate a catalogue which is available to the entire community. Together with historical maps and the current cartography of the city, the system is also able to manage geo-tagged multimedia data, which come from user field-surveys performed through mobile devices and report POIs (Points Of Interest). Each logged user can then contribute to POI characterization by adding textual and multimedia information (e.g. images, audios and videos) directly on the globe. All in all, the resulting application allows users to create and share contributions as usually happens on social platforms, additionally providing a realistic 3D representation enhancing the expressive power of data.
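
    The cartographic mash-ups mentioned above are assembled from standard Web Map Service (WMS) layers. As a minimal illustration, the Python snippet below builds a WMS 1.1.1 GetMap request for one layer over a bounding box; the endpoint URL and layer name are placeholders, not the project's actual service.

      import urllib.parse

      def getmap_url(endpoint, layer, bbox, width=512, height=512,
                     srs="EPSG:4326", fmt="image/png"):
          """Build a WMS 1.1.1 GetMap URL for one layer over a bounding box."""
          params = {
              "SERVICE": "WMS",
              "VERSION": "1.1.1",
              "REQUEST": "GetMap",
              "LAYERS": layer,
              "STYLES": "",
              "SRS": srs,
              "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
              "WIDTH": str(width),
              "HEIGHT": str(height),
              "FORMAT": fmt,
              "TRANSPARENT": "TRUE",
          }
          return endpoint + "?" + urllib.parse.urlencode(params)

      # Placeholder endpoint and layer name, not the project's actual service.
      print(getmap_url("https://example.org/geoserver/wms", "como:historic_map",
                       bbox=(9.05, 45.79, 9.12, 45.83)))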

  15. Presence Pedagogy: Teaching and Learning in a 3D Virtual Immersive World

    ERIC Educational Resources Information Center

    Bronack, Stephen; Sanders, Robert; Cheney, Amelia; Riedl, Richard; Tashner, John; Matzen, Nita

    2008-01-01

    As the use of 3D immersive virtual worlds in higher education expands, it is important to examine which pedagogical approaches are most likely to bring about success. AET Zone, a 3D immersive virtual world in use for more than seven years, is one embodiment of pedagogical innovation that capitalizes on what virtual worlds have to offer to social…

  16. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  17. Dissection of C. elegans behavioral genetics in 3-D environments

    PubMed Central

    Kwon, Namseop; Hwang, Ara B.; You, Young-Jai; V. Lee, Seung-Jae; Ho Je, Jung

    2015-01-01

    The nematode Caenorhabditis elegans is a widely used model for genetic dissection of animal behaviors. Despite extensive technical advances in imaging methods, it remains challenging to visualize and quantify C. elegans behaviors in three-dimensional (3-D) natural environments. Here we developed an innovative 3-D imaging method that enables quantification of C. elegans behavior in 3-D environments. Furthermore, for the first time, we characterized 3-D-specific behavioral phenotypes of mutant worms that have defects in head movement or mechanosensation. This approach allowed us to reveal previously unknown functions of genes in behavioral regulation. We expect that our 3-D imaging method will facilitate new investigations into genetic basis of animal behaviors in natural 3-D environments. PMID:25955271

  18. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object, which involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the VR (Virtual Reality) class was applied. In the elaborated interactive application, adequate procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion and the drive system of the manipulator. Additionally, a procedure was created for turning on and off the output crushing head mounted on the last element of the manipulator. In the elaborated interactive application, procedures were established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In the developed application, procedures were created that are responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the control program is verified.
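
    The Visual Basic/DDE integration layer itself is not reproduced here; the short Python sketch below only illustrates the overall loop implied by the abstract: the controller's outputs are polled each cycle and pushed to the interactive 3D application. Both helper functions are hypothetical stand-ins, not the authors' code.

      import time

      def read_controller_outputs():
          """Stand-in for polling the virtual controller (the paper uses DDE from
          Visual Basic); returns actuator commands produced by the LD program."""
          return {"advance_drive": True, "manipulator_rotate": 0.2, "crushing_head_on": True}

      def apply_to_virtual_model(outputs):
          """Stand-in for the VR application's procedures that move the tunneling
          machine's drive systems and toggle the crushing head."""
          print("applying", outputs)

      def integration_loop(cycle_time_s=0.1, cycles=5):
          for _ in range(cycles):
              apply_to_virtual_model(read_controller_outputs())
              time.sleep(cycle_time_s)

      integration_loop()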

  19. Virtual Reality: The Future of Animated Virtual Instructor, the Technology and Its Emergence to a Productive E-Learning Environment.

    ERIC Educational Resources Information Center

    Jiman, Juhanita

    This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…

  20. The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning

    ERIC Educational Resources Information Center

    Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos

    2004-01-01

    This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…

  1. 3D Inhabited Virtual Worlds: Interactivity and Interaction between Avatars, Autonomous Agents, and Users.

    ERIC Educational Resources Information Center

    Jensen, Jens F.

    This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…

  2. Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies

    ERIC Educational Resources Information Center

    Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis

    2009-01-01

    We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…

  3. Navigation in virtual environments

    NASA Astrophysics Data System (ADS)

    Arthur, Erik; Hancock, Peter A.; Telke, Susan

    1996-06-01

    Virtual environments show great promise in the area of training. Although such synthetic environments project homeomorphic physical representations of real-world layouts, it is not known how individuals develop models to match such environments. To evaluate this process, the present experiment examined the accuracy of triadic representations of objects that had previously been learned under different conditions. The layout consisted of four different colored spheres arranged on a flat plane. These objects could be viewed in either a free navigation virtual environment condition (NAV) or a single body position (SBP) virtual environment condition. The first condition allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. These viewing conditions were a between-subjects variable, with ten participants randomly assigned to each condition. Performance was assessed by the response latency to judge the accuracy of a layout of three objects over different rotations. Results showed linear increases in response latency as the rotation angle increased from the initial perspective in the SBP condition. The NAV condition did not show a similar effect of rotation angle. These results suggest that spatial knowledge acquisition from virtual environments through navigation is similar to that from actual navigation.

  4. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique characteristic in the regional castral landscape. Visible from the valley, it was named "the Eye of the witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to valorize the vestiges. Among the numerous planned works, a key objective was to realize a 3D model of the site in its current state, in other words a virtual model "as captured", exploitable from a cultural and tourism point of view as well as by scientists for archaeological research. The ICube/INSA lab team was responsible for the realization of this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from a series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The results obtained allow us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  5. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  6. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  7. Learning to Collaborate: Designing Collaboration in a 3-D Game Environment

    ERIC Educational Resources Information Center

    Hamalainen, Raija; Manninen, Tony; Jarvela, Sanna; Hakkinen, Paivi

    2006-01-01

    To respond to learning needs, Computer-Supported Collaborative Learning (CSCL) must provide instructional support. The particular focus of this paper is on designing collaboration in a 3-D virtual game environment intended to make learning more effective by promoting student opportunities for interaction. The empirical experiment eScape, which…

  8. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become a research hotspot. A 3D virtual campus model can not only express real-world objects in a natural, realistic and vivid way, but can also extend the real campus in the dimensions of time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land features, etc. The dynamic interactive functions are then realized by programming the object models built in 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, as well as a variety of real-time processing and optimization strategies used in the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  9. Evaluation of Home Delivery of Lectures Utilizing 3D Virtual Space Infrastructure

    ERIC Educational Resources Information Center

    Nishide, Ryo; Shima, Ryoichi; Araie, Hiromu; Ueshima, Shinichi

    2007-01-01

    Evaluation experiments have been essential in exploring home delivery of lectures for which users can experience campus lifestyle and distant learning through 3D virtual space. This paper discusses the necessity of virtual space for distant learners by examining the effects of virtual space. The authors have pursued the possibility of…

  10. Collaborative virtual environments art exhibition

    NASA Astrophysics Data System (ADS)

    Dolinsky, Margaret; Anstey, Josephine; Pape, Dave E.; Aguilera, Julieta C.; Kostis, Helen-Nicole; Tsoupikova, Daria

    2005-03-01

    This panel presentation will exhibit artwork developed in CAVEs and discuss how art methodologies enhance the science of VR through collaboration, interaction and aesthetics. Artists and scientists work alongside one another to expand scientific research and artistic expression and are motivated by exhibiting collaborative virtual environments. Looking towards the arts, such as painting and sculpture, computer graphics captures a visual tradition. Virtual reality expands this tradition to not only what we face, but to what surrounds us and even what responds to our body and its gestures. Art making that once was isolated to the static frame and an optimal point of view is now out and about, in fully immersive mode within CAVEs. Art knowledge is a guide to how the aesthetics of 2D and 3D worlds affect, transform, and influence the social, intellectual and physical condition of the human body through attention to psychology, spiritual thinking, education, and cognition. The psychological interacts with the physical in the virtual in such a way that each facilitates, enhances and extends the other, culminating in a "go together" world. Attention to sharing art experience across high-speed networks introduces a dimension of liveliness and aliveness when we "become virtual" in real time with others.

  11. Shared virtual environments for telerehabilitation.

    PubMed

    Popescu, George V; Burdea, Grigore; Boian, Rares

    2002-01-01

    Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring--the monitoring portal for hand telerehabilitation--was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations. PMID:15458115

  12. Virtual System Environments

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Ong, Hong Hoe; Tikotekar, Anand A; Engelmann, Christian; Bland, Wesley B; Aderholdt, Ferrol; Scott, Stephen L

    2008-01-01

    Distributed and parallel systems are typically managed with "static" settings: the operating system (OS) and the runtime environment (RTE) are specified at a given time and cannot be changed to fit an application's needs. This means that every time application developers want to use their application on a new execution platform, the application has to be ported to this new environment, which may be expensive in terms of application modifications and developer time. However, the science resides in the applications and not in the OS or the RTE. Therefore, it should be beneficial to adapt the OS and the RTE to the application instead of adapting the applications to the OS and the RTE. This document presents the concept of Virtual System Environments (VSE), which enables application developers to specify and create a virtual environment that properly fits their application's needs. To that end, four challenges have to be addressed: (i) definition of the VSE itself by the application developers, (ii) deployment of the VSE, (iii) system administration for the platform, and (iv) protection of the platform from the running VSE. We therefore present an integrated tool for the definition and deployment of VSEs on top of traditional and virtual (i.e., using system-level virtualization) execution platforms. This tool provides the capability to choose the degree of delegation for system administration tasks and the degree of protection from the application (e.g., using virtual machines). To summarize, the VSE concept enables the customization of the OS/RTE used for the execution of applications by users without compromising local system administration rules and execution platform protection constraints.

  13. 3-D Virtual and Physical Reconstruction of Bendego Iron

    NASA Astrophysics Data System (ADS)

    Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.

    2012-09-01

    The use of 3D laser scanning on meteorites preserves their original shape before cutting, and the ability to save the data in STL (stereolithography) format makes it possible to print three-dimensional physical models and to generate a digital replica.

  14. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... body) from the National Library of Medicine's Visible Human project (www.nlm.nih.gov). By 1996, Kaufman and his colleagues had patented a pioneering computer software system and techniques for 3-D virtual ...

  15. Spilling the beans on java 3D: a tool for the virtual anatomist.

    PubMed

    Guttmann, G D

    1999-04-15

    The computing world has just provided the anatomist with another tool: Java 3D, within the Java 2 platform. On December 9, 1998, Sun Microsystems released Java 2. Java 3D classes are now included in the jar (Java Archive) archives of the extensions directory of Java 2. Java 3D is also a part of the Java Media Suite of APIs (Application Programming Interfaces). But what is Java? How does Java 3D work? How do you view Java 3D objects? A brief introduction to the concepts of Java and object-oriented programming is provided. Also, there is a short description of the tools of Java 3D and of the Java 3D viewer. Thus, the virtual anatomist has another set of computer tools to use for modeling various aspects of anatomy, such as embryological development. Also, the virtual anatomist will be able to assist the surgeon with virtual surgery using the tools found in Java 3D. Java 3D will be able to fill gaps that currently exist in many anatomical computer-aided learning programs, such as the lack of platform independence, interactivity, and manipulability of 3D images. PMID:10321435

  16. 3D Visualisation and Artistic Imagery to Enhance Interest in "Hidden Environments"--New Approaches to Soil Science

    ERIC Educational Resources Information Center

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-01-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…

  17. Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"

    ERIC Educational Resources Information Center

    Minocha, Shailey; Reeves, Ahmad John

    2010-01-01

    "Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or research…

  18. Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices

    NASA Astrophysics Data System (ADS)

    Wang, Guo-Zhen; Huang, Yi-Pai; Hu, Kuo-Jui

    2012-06-01

    We propose a virtual 3D-touch system operated by a bare finger, which can detect 3-axis (x, y, z) finger information. The system has a multi-wavelength optical sensor array embedded on the backplane of the TFT panel and sequential devices on the border of the TFT panel. We developed a reflecting mode that works with a bare finger for 3D interaction. A 4-inch mobile 3D LCD with the proposed system has already been successfully demonstrated.

  19. Sockeye: a 3D environment for comparative genomics.

    PubMed

    Montgomery, Stephen B; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A Gordon; Sleumer, Monica; Siddiqui, Asim S; Jones, Steven J M

    2004-05-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  20. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error ≈1.1 cm and a mean angle error ≈0.6° within the chosen 3D model; the precision does not drop below that of the technical instrument (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.
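
    A minimal sketch of the geometric core of mapping a gaze ray into a 3D model is given below: the eye ray is intersected with the model's triangles and the nearest hit is kept as the gaze pointer. This is a generic ray/triangle test in Python (Möller-Trumbore), not the authors' pipeline; the triangle and ray values are invented for the example.

      import numpy as np

      def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
          """Moller-Trumbore ray/triangle intersection; returns distance t or None."""
          e1, e2 = v1 - v0, v2 - v0
          p = np.cross(direction, e2)
          det = np.dot(e1, p)
          if abs(det) < eps:
              return None                      # ray parallel to triangle plane
          inv = 1.0 / det
          s = origin - v0
          u = np.dot(s, p) * inv
          if u < 0.0 or u > 1.0:
              return None
          q = np.cross(s, e1)
          v = np.dot(direction, q) * inv
          if v < 0.0 or u + v > 1.0:
              return None
          t = np.dot(e2, q) * inv
          return t if t > eps else None

      def gaze_hit(eye, gaze_dir, triangles):
          """Nearest intersection of the gaze ray with a triangle soup (the 3D model)."""
          hits = [t for tri in triangles
                  if (t := ray_triangle(eye, gaze_dir, *tri)) is not None]
          return eye + min(hits) * gaze_dir if hits else None

      tri = (np.array([0.0, -1.0, -1.0]), np.array([0.0, 1.0, -1.0]), np.array([0.0, 0.0, 1.0]))
      print(gaze_hit(np.array([-2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), [tri]))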

  1. A method of 3-D data information storage with virtual holography

    NASA Astrophysics Data System (ADS)

    Huang, Zhen; Liu, Guodong; Ren, Zhong; Zeng, Lüming

    2008-12-01

    In this paper, a new method of 3-D data-cube storage based on virtual holography is presented. Firstly, the data information is encoded in the form of a 3-D data cube with a certain algorithm, in which the interval between data points along each coordinate is d. Using the plane-scanning method, the 3-D cube can be described as an assembly of slices, i.e. parallel planes along the coordinates at an interval of d. Each dot on a slice represents a bit: a bright dot means "1", while a dark dot means "0". Secondly, a hologram of the 3-D cube is obtained by computer with virtual optics technology; all the information of a 3-D cube can be described by a 2-D hologram. Finally, the hologram is input to the SLM and recorded in the recording material by intersecting two coherent laser beams. When the 3-D data is exported, a reference light illuminates the hologram, and a CCD is used to capture the object image, which is a hologram of the 3-D data; the 3-D data is then computed with virtual optics technology. Compared with 2-D data page storage, 3-D data cube storage offers a larger storage capacity and higher data security.
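
    A small sketch of the encoding step described above is shown below: a byte string is packed into a cube of bits whose axis-parallel slices are the planes that end up in the hologram. The cube edge length and helper names are free choices for the illustration, not the paper's algorithm.

      import numpy as np

      def bytes_to_cube(data: bytes, edge: int) -> np.ndarray:
          """Pack a byte string into an edge x edge x edge cube of bits
          (1 = bright dot, 0 = dark dot); unused positions are padded with 0."""
          bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
          cube = np.zeros(edge ** 3, dtype=np.uint8)
          if bits.size > cube.size:
              raise ValueError("data does not fit into the cube")
          cube[:bits.size] = bits
          return cube.reshape(edge, edge, edge)

      def cube_to_slices(cube: np.ndarray):
          """The plane-scanning view: the cube as a stack of parallel 2-D slices."""
          return [cube[k] for k in range(cube.shape[0])]

      cube = bytes_to_cube(b"3D", edge=4)        # 16 bits fit easily in a 4x4x4 cube
      print(len(cube_to_slices(cube)), cube.sum())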

  2. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included. PMID:24678025

  3. The Virtual-casing Principle For 3D Toroidal Systems

    SciTech Connect

    Lazerson, Samuel A.

    2014-02-24

    The capability to calculate the magnetic field due to the plasma currents in a toroidally confined magnetic fusion equilibrium is of manifest relevance to equilibrium reconstruction and stellarator divertor design. Two methodologies arise for calculating such quantities. The first is a volume integral over the plasma current density for a given equilibrium; such an integral is computationally expensive. The second is a surface integral over a surface current on the equilibrium boundary. This method is computationally desirable, as its cost does not grow with the radial resolution required by the volume integral. This surface integral method has come to be known as the "virtual-casing principle". In this paper, a full derivation of this method is presented along with a discussion regarding its optimal application.
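
    For orientation, the surface-integral statement is often written in the following schematic form (signs, normalization and the treatment of observation points on or inside the boundary S depend on convention, so this should be read as illustrative rather than as the paper's derivation): the plasma contribution to the field at a point x outside S equals the Biot-Savart field of an equivalent surface current built from the equilibrium field B on S,

      \mathbf{B}_{\mathrm{plasma}}(\mathbf{x}) = \frac{\mu_0}{4\pi}
          \oint_{S} \frac{\mathbf{K}(\mathbf{x}') \times (\mathbf{x}-\mathbf{x}')}
                         {\lvert \mathbf{x}-\mathbf{x}' \rvert^{3}} \, dS' ,
      \qquad
      \mathbf{K}(\mathbf{x}') = \frac{\hat{\mathbf{n}}(\mathbf{x}') \times \mathbf{B}(\mathbf{x}')}{\mu_0} .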

  4. 3D structure of nucleon with virtuality distributions

    NASA Astrophysics Data System (ADS)

    Radyushkin, Anatoly

    2014-09-01

    We describe a new approach to transverse momentum dependence in hard processes. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, bilocal O(0, z)) describing a hadron with momentum p. Treated as functions of (pz) and z², they are parametrized through the parton virtuality distribution (PVD) Φ(x, σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z⁺ = 0, we introduce the transverse momentum distribution (TMD) f(x, k⊥), and write it in terms of the PVD Φ(x, σ). The results of covariant calculations, written in terms of Φ(x, σ), are converted into expressions involving f(x, k⊥). We propose models for soft PVDs/TMDs, and describe how one can generate high-k⊥ tails of TMDs from primordial soft distributions. Supported by Jefferson Science Associates, LLC under U.S. DOE Contract #DE-AC05-06OR23177 and by U.S. DOE Grant #DE-FG02-97ER41028.
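
    Read literally, the conjugation statement above corresponds, schematically and up to signs, normalization factors and the iε prescription (none of which the abstract fixes), to a representation of the form

      M\bigl((pz),\, z^{2}\bigr) \;\sim\; \int dx \int_{0}^{\infty} d\sigma \;
          \Phi(x,\sigma)\, e^{-i x (pz)}\, e^{-i \sigma z^{2}} .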

  5. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  6. Memory and visual search in naturalistic 2D and 3D environments.

    PubMed

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  7. Novel 3D modeling methods for virtual fabrication and EDA compatible design of MEMS via parametric libraries

    NASA Astrophysics Data System (ADS)

    Schröpfer, Gerold; Lorenz, Gunar; Rouvillois, Stéphane; Breit, Stephen

    2010-06-01

    This paper provides a brief summary of the state-of-the-art of MEMS-specific modeling techniques and describes the validation of new models for a parametric component library. Two recently developed 3D modeling tools are described in more detail. The first one captures a methodology for designing MEMS devices and simulating them together with integrated electronics within a standard electronic design automation (EDA) environment. The MEMS designer can construct the MEMS model directly in a 3D view. The resulting 3D model differs from a typical feature-based 3D CAD modeling tool in that there is an underlying behavioral model and parametric layout associated with each MEMS component. The model of the complete MEMS device that is shared with the standard EDA environment can be fully parameterized with respect to manufacturing- and design-dependent variables. Another recent innovation is a process modeling tool that allows accurate and highly realistic visualization of the step-by-step creation of 3D micro-fabricated devices. The novelty of the tool lies in its use of voxels (3D pixels) rather than conventional 3D CAD techniques to represent the 3D geometry. Case studies for experimental devices are presented showing how the examination of these virtual prototypes can reveal design errors before mask tape out, support process development before actual fabrication and also enable failure analysis after manufacturing.
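
    The voxel representation mentioned for the process modeling tool lends itself to a very small illustration: each fabrication step becomes a boolean update of a 3-D occupancy grid. The Python sketch below shows deposit and etch steps on a toy grid; the grid size, material ids and step sequence are invented for the example and do not reflect the actual tool.

      import numpy as np

      def deposit(grid, material_id, thickness):
          """Add `thickness` voxel layers of material on top of the current surface."""
          for x in range(grid.shape[0]):
              for y in range(grid.shape[1]):
                  top = np.count_nonzero(grid[x, y])            # current column height
                  grid[x, y, top:top + thickness] = material_id
          return grid

      def etch(grid, mask, depth):
          """Remove `depth` voxel layers from the top wherever the mask is open (True)."""
          for x in range(grid.shape[0]):
              for y in range(grid.shape[1]):
                  if mask[x, y]:
                      top = np.count_nonzero(grid[x, y])
                      grid[x, y, max(top - depth, 0):top] = 0
          return grid

      wafer = np.zeros((8, 8, 16), dtype=np.uint8)
      wafer = deposit(wafer, material_id=1, thickness=4)         # substrate
      wafer = deposit(wafer, material_id=2, thickness=2)         # structural layer
      mask = np.zeros((8, 8), dtype=bool)
      mask[2:6, 2:6] = True
      wafer = etch(wafer, mask, depth=2)                         # pattern the layer
      print(np.count_nonzero(wafer))                             # voxels of material left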

  8. Coverage Estimation of Geosensor in 3d Vector Environments

    NASA Astrophysics Data System (ADS)

    Afghantoloee, A.; Doodman, S.; Karimipour, F.; Mostafavi, M. A.

    2014-10-01

    Sensor deployment optimization to achieve the maximum spatial coverage is one of the main issues in Wireless geoSensor Networks (WSN). The model of the environment is an essential parameter that influences the accuracy of geosensor coverage. In most recent studies, the environment has been modeled by a Digital Surface Model (DSM). However, advances in technology for collecting 3D vector data at different levels, especially in urban models, can enhance the quality of geosensor deployment and enable more accurate coverage estimations. This paper proposes an approach to calculate geosensor coverage in 3D vector environments. The approach is applied to several case studies and compared with DSM-based methods.

  9. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  10. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules, i.e. a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process is abstracted as the movement of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
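
    As a minimal illustration of the particle abstraction described above, the Python sketch below advects a cloud of oil particles with the local current plus a fraction of the wind, adding a random-walk term standing in for turbulent diffusion. The coefficients, velocities and time step are placeholders, not values from VV-Ocean.

      import numpy as np

      rng = np.random.default_rng(0)

      def advect(particles, current, wind, dt, wind_drag=0.03, diffusion=0.5):
          """Move oil particles one time step: current plus a fraction of the wind,
          plus a random-walk term standing in for turbulent diffusion."""
          drift = current + wind_drag * wind
          noise = rng.normal(scale=np.sqrt(2.0 * diffusion * dt), size=particles.shape)
          return particles + drift * dt + noise

      # 1000 particles released at the sea-bottom leak position (x, y in metres).
      particles = np.zeros((1000, 2))
      current = np.array([0.20, 0.05])   # m/s, placeholder
      wind = np.array([5.0, -2.0])       # m/s, placeholder
      for _ in range(600):               # ten minutes at dt = 1 s
          particles = advect(particles, current, wind, dt=1.0)
      print(particles.mean(axis=0), particles.std(axis=0))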

  11. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch and interact with these floating images, for example as if kids were playing with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method, using a single camera rather than a stereo camera, and present the results of our viewer system.
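
    The paper derives its own single-camera geometric analysis, which is not reproduced here. As a generic stand-in, the Python/OpenCV sketch below shows how a viewer pose can be recovered from the known 3-D positions of infrared LED point markers and their detected 2-D image positions using a PnP solver; the marker layout, pixel coordinates and camera intrinsics are placeholder values.

      import numpy as np
      import cv2

      # Known 3-D positions of the infrared LED point markers in the workspace (metres).
      object_points = np.array([[0.0, 0.0, 0.0],
                                [0.2, 0.0, 0.0],
                                [0.2, 0.15, 0.0],
                                [0.0, 0.15, 0.0]], dtype=np.float64)

      # Their detected 2-D positions in the camera image (pixels, placeholder values).
      image_points = np.array([[310.0, 240.0],
                               [420.0, 238.0],
                               [422.0, 325.0],
                               [312.0, 328.0]], dtype=np.float64)

      # Placeholder pinhole intrinsics for the viewer's camera.
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)

      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
      if ok:
          R, _ = cv2.Rodrigues(rvec)                 # camera pose: rotation and translation
          camera_position = (-R.T @ tvec).ravel()    # viewer position in marker coordinates
          print(camera_position)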

  12. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to the urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: in the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; and the third approach is close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area is created. In the third section, this 3D model is exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model are performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model is created, which can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries

  13. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper will demonstrate how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet to enable end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) will be described, as will techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering will demonstrate the extent to which such solutions are scalable in order to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  14. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for a computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…

  15. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we will attempt to show some methods in which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  16. Exploring Cultural Heritage Resources in a 3d Collaborative Environment

    NASA Astrophysics Data System (ADS)

    Respaldiza, A.; Wachowicz, M.; Vázquez Hoehne, A.

    2012-06-01

    Cultural heritage is a complex and diverse concept, which brings together a wide domain of information. Resources linked to a cultural heritage site may consist of physical artefacts, books, works of art, pictures, historical maps, aerial photographs, archaeological surveys and 3D models. Moreover, all these resources are listed and described by a variety of metadata specifications that allow online search and consultation of their most basic characteristics. Some examples include ISO 19115, Dublin Core, AAT, CDWA, CCO, DACS, MARC, MoReq, MODS, MuseumDat, TGN, SPECTRUM, VRA Core and Z39.50. Gateways are in place to map these metadata standards onto those used in an SDI (ISO 19115 or INSPIRE), but substantial work still remains to be done for the complete incorporation of cultural heritage information. Therefore, the aim of this paper is to demonstrate how the complexity of cultural heritage resources can be dealt with by a visual exploration of their metadata within a 3D collaborative environment. 3D collaborative environments are promising tools that represent the new frontier of our capacity for learning, understanding, communicating and transmitting culture.

  17. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's individual task, and the other is communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task: the viewpoint from behind the user's own avatar for smooth communication, and the avatar's eye view for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We therefore compensate for the loss of the partner avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  18. 3D Laser Triangulation for Plant Phenotyping in Challenging Environments

    PubMed Central

    Kjaer, Katrine Heinsvig; Ottosen, Carl-Otto

    2015-01-01

    To increase the understanding of how the plant phenotype is formed by genotype and environmental interactions, simple and robust high-throughput plant phenotyping methods should be developed and considered. This would not only broaden the application range of phenotyping in the plant research community, but also increase the ability of researchers to study plants in their natural environments. By studying plants in their natural environment at high temporal resolution, more knowledge on how multiple stresses interact in defining the plant phenotype could lead to a better understanding of the interaction between plant responses and epigenetic regulation. In the present paper, we evaluate a commercial 3D NIR-laser scanner (PlantEye, Phenospex B.V., Heerlen, The Netherlands) to track daily changes in plant growth with high precision in challenging environments. Firstly, we demonstrate that the NIR laser beam of the scanner does not affect plant photosynthetic performance. Secondly, we demonstrate that it is possible to estimate phenotypic variation in the growth pattern of ten genotypes of Brassica napus L. (rapeseed), using a simple linear correlation between scanned parameters and destructive growth measurements. Our results demonstrate the high potential of 3D laser triangulation for simple measurements of phenotypic variation in challenging environments and at a high temporal resolution. PMID:26066990
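
    A minimal sketch of the calibration step described above, fitting a simple linear relation between a scanner-derived parameter and a destructive growth measurement; all numbers are invented for illustration and SciPy is assumed.

```python
# Sketch: relate a non-destructive scanner parameter (e.g. digital plant volume)
# to a destructive measurement (e.g. dry weight) with a simple linear fit.
import numpy as np
from scipy import stats

scanned_volume = np.array([12.1, 18.4, 22.0, 27.3, 31.8, 40.2])  # scanner parameter (arbitrary units)
dry_weight     = np.array([0.9, 1.4, 1.7, 2.1, 2.4, 3.1])        # destructive measurement (g)

fit = stats.linregress(scanned_volume, dry_weight)
print(f"dry_weight ~ {fit.slope:.3f} * volume + {fit.intercept:.3f}  (r = {fit.rvalue:.2f})")

# The fitted relation can then convert daily non-destructive scans into
# estimated biomass for tracking growth over time.
estimated = fit.slope * scanned_volume + fit.intercept
```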

  19. The virtual environment display system

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1991-01-01

    Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.

  20. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of accessible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method for learning the techniques of conducting operations in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient, high realism of the operating-room environment and anatomical body structures, the use of game mechanics to ease information perception and accelerate the memorization of methods, and the accessibility of the program.

  1. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  2. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    ERIC Educational Resources Information Center

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  3. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry

    NASA Astrophysics Data System (ADS)

    Villarrubia, J. S.; Tondare, V. N.; Vladár, A. E.

    2016-03-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples—mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
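
    One ingredient of such a virtual sample is a random rough profile whose power spectral density follows a chosen shape. The sketch below generates such a profile from a power-law PSD with random phases and an inverse FFT; the exponent and amplitude are illustrative assumptions, not values from the paper.

```python
# Sketch: synthesize a 1D rough "skin" profile with a prescribed (power-law)
# power spectral density using random phases and an inverse FFT.
import numpy as np

def rough_profile(n=1024, dx=1.0, exponent=-2.0, amplitude=1.0, seed=0):
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = np.zeros_like(freqs)
    psd[1:] = amplitude * freqs[1:] ** exponent        # target PSD; DC term left at zero
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = np.sqrt(psd) * np.exp(1j * phases)
    profile = np.fft.irfft(spectrum, n=n)
    return profile - profile.mean()

skin = rough_profile()
print("rms roughness (arbitrary units):", skin.std())
# In the full method this kind of rough skin would be wrapped around the smooth
# near-trapezoidal line before simulating SEM images from the virtual sample.
```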

  4. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  5. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  6. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...

  7. Virtualization, virtual environments, and content-based retrieval of three-dimensional information for cultural applications

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; Peters, Shawn; Beraldin, J. A.; Valzano, Virginia; Bandiera, Adriana

    2003-01-01

    The present paper proposes a virtual environment for visualizing virtualized cultural and historical sites. The proposed environment is based on a distributed asynchronous architecture and supports stereo vision and tiled wall display. The system is mobile and can run from two laptops. This virtual environment addresses the problems of intellectual property protection and multimedia information retrieval through encryption and content-based management, respectively. Experimental results with a fully textured 3D model of the Crypt of Santa Cristina in Italy are presented, evaluating the performance of the proposed virtual environment.

  8. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius.

    PubMed

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-11-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008 and 2012 with virtual planning and PSIs for a combined intra- and extraarticular malunion of the distal radius. The accuracy of the correction is quantified by comparing the virtual three-dimensional (3D) planning model with the postoperative 3D bone model. For the extraarticular malunion the 3D volar tilt, 3D radial inclination and 3D ulnar variance are measured. The volar tilt is undercorrected in all cases with an average of -6 ± 6°. The average difference between the postoperative and planned 3D radial inclination was -1 ± 5°. The average difference between the postoperative and planned 3D ulnar variances is 0 ± 1 mm. For the evaluation of the intraarticular malunion, both the arc method of measurement and distance map measurement are used. The average postoperative maximum gap is 2.1 ± 0.9 mm. The average maximum postoperative step-off is 1.3 ± 0.4 mm. The average distance between the postoperative and planned articular surfaces is 1.1 ± 0.6 mm as determined in the distance map measurement. There is a tendency to achieve higher accuracy as experience builds up, both on the surgeon's side and on the design engineering side. We believe this technology holds the potential to achieve consistent accuracy of very complex corrections. PMID:24436834

  9. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius

    PubMed Central

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-01-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008 and 2012 with virtual planning and PSIs for a combined intra- and extraarticular malunion of the distal radius. The accuracy of the correction is quantified by comparing the virtual three-dimensional (3D) planning model with the postoperative 3D bone model. For the extraarticular malunion the 3D volar tilt, 3D radial inclination and 3D ulnar variance are measured. The volar tilt is undercorrected in all cases with an average of –6 ± 6°. The average difference between the postoperative and planned 3D radial inclination was –1 ± 5°. The average difference between the postoperative and planned 3D ulnar variances is 0 ± 1 mm. For the evaluation of the intraarticular malunion, both the arc method of measurement and distance map measurement are used. The average postoperative maximum gap is 2.1 ± 0.9 mm. The average maximum postoperative step-off is 1.3 ± 0.4 mm. The average distance between the postoperative and planned articular surfaces is 1.1 ± 0.6 mm as determined in the distance map measurement. There is a tendency to achieve higher accuracy as experience builds up, both on the surgeon's side and on the design engineering side. We believe this technology holds the potential to achieve consistent accuracy of very complex corrections. PMID:24436834

  10. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient specific data, and display that data to the end user using consumer level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glass, and the DK2 version Oculus Rift, as well as two different user interaction devices - a space mouse and traditional keyboard controls. PMID:27046584

  11. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    NASA Astrophysics Data System (ADS)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    … In the ARTEC digital mock-up, for example, it is possible to select the individual frames, already polygonal and geo-referenced at the time of capture; however, automated texturization is not possible, unlike in the low-cost environment, which produces a good graphic definition. Once the final 3D models were obtained, we proceeded to make a geometric and graphic comparison of the results. Therefore, in order to provide an accuracy requirement and an assessment of the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies of the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips, structure, etc. Such data, currently captured only with traditional instruments such as a tape measure, would be well represented by a process of virtual reconstruction and cataloguing.

  12. Spatial Integration under Contextual Control in a Virtual Environment

    ERIC Educational Resources Information Center

    Molet, Mikael; Gambet, Boris; Bugallo, Mehdi; Miller, Ralph R.

    2012-01-01

    The role of context was examined in the selection and integration of independently learned spatial relationships. Using a dynamic 3D virtual environment, participants learned one spatial relationship between landmarks A and B which was established in one virtual context (e.g., A is left of B) and a different spatial relationship which was…

  13. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  14. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  15. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm3 on large 100,000 m3 models, are presented in detail. The techniques tackle the core challenges of the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing a point cloud size reduction of 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.
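
    A minimal sketch of the planar-based decimation idea, assuming Open3D: detect the dominant plane with RANSAC and downsample its points far more aggressively than the remaining detail. This illustrates the general principle rather than the authors' exact algorithm; the file name and thresholds are placeholders.

```python
# Sketch: decimate planar regions of an indoor scan more aggressively than detail.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_scan.ply")   # placeholder input file

plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.02,
                                            ransac_n=3,
                                            num_iterations=1000)
planar = pcd.select_by_index(inlier_idx)                 # points on the dominant plane
detail = pcd.select_by_index(inlier_idx, invert=True)    # everything else

planar_sparse = planar.voxel_down_sample(voxel_size=0.20)  # keep few points on planes
detail_dense = detail.voxel_down_sample(voxel_size=0.01)   # keep fine detail elsewhere

merged = o3d.geometry.PointCloud()
merged.points = o3d.utility.Vector3dVector(
    np.vstack([np.asarray(planar_sparse.points), np.asarray(detail_dense.points)]))
print(f"{len(pcd.points)} -> {len(merged.points)} points")
```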

  16. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  17. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual Environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as issues of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  18. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped like a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  19. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. Thirty-six participants explored 24 simple and 24 difficult objects (composed of three and five Biederman geons, respectively) actively, passively, or not at all. Both their 3D mental representations of the objects and their visuo-spatial ability (VSA) were assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, whereas people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance of taking individual differences into account. PMID:20116394

  20. Instructional Design Practices in the Design and Development of Digital Humanities Virtual Environments (DH-VEs)

    ERIC Educational Resources Information Center

    Kelly, Valerie Hunter

    2011-01-01

    Virtual environments, virtual worlds, simulations, 3D models are loaded with potential, promise, and problems. While learning in virtual settings is still being researched, instructional designers are challenged as to which instructional design practices are best suited for virtual environments (VEs). The problem is there is a lack of a conceptual…

  1. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures. PMID:8793223

  2. Virtual Environments in Biology Teaching

    ERIC Educational Resources Information Center

    Mikropoulos, Tassos A.; Katsikis, Apostolos; Nikolou, Eugenia; Tsakalis, Panayiotis

    2003-01-01

    This article reports on the design, development and evaluation of an educational virtual environment for biology teaching. In particular it proposes a highly interactive three-dimensional synthetic environment involving certain learning tasks for the support of teaching plant cell biology and the process of photosynthesis. The environment has been…

  3. L2 Immersion in 3D Virtual Worlds: The Next Thing to Being There?

    ERIC Educational Resources Information Center

    Paillat, Edith

    2014-01-01

    Second Life is one of the many three-dimensional virtual environments accessible through a computer and a fast broadband connection. Thousands of participants connect to this platform to interact virtually with the world, join international communities of practice and, for some, role play groups. Unlike online role play games however, Second Life…

  4. An Interactive, 3D Fault Editor for VR Environments

    NASA Astrophysics Data System (ADS)

    van Aalsburg, J.; Yikilmaz, M. B.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2008-12-01

    Digital Fault Models (DFM) play a vital role in the study of earthquake dynamics, fault-earthquake interactions, and seismicity. DFMs serve as input for finite-element method (FEM) or other earthquake simulations such as Virtual California. Generally, digital fault models are generated by importing a digitized and georeferenced (2D) fault map and/or a hillshade image of the study area into a geographical information system (GIS) application, where individual fault lines are traced by the user. Data assimilation and creation of a DFM, or updating an existing DFM based on new observations, is a tedious and time-consuming process. In order to facilitate the creation process, we are developing an immersive virtual reality (VR) application to visualize and edit fault models. This program is designed to run in immersive environments such as a CAVE (walk-in VR environment), but also works in a wide range of other environments, including desktop systems and GeoWalls. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). Our program allows users to create new models or modify existing ones; for instance by repositioning individual fault segments, by changing the dip angle, or by modifying (or assigning) the value of a property associated with a particular fault segment (e.g., slip rate). With the addition of high resolution Digital Elevation Models (DEM), georeferenced active tectonic fault maps and earthquake hypocenters, the user can accurately add new segments to an existing model or create a fault model entirely from scratch. Interactively created or modified models can be written to XML files at any time; from there the data may easily be converted into various formats required by the analysis software or simulation. We believe that the ease of interaction provided by VR technology is ideally suited to the problem of creating and editing digital fault models. Our software provides

  5. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    PubMed

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

    The volumetric shape of a human embryo and its development are hard to comprehend, as they have traditionally been viewed as 2D schematics in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated using a morphing technique applied to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology. PMID:8591413

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential of high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506
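
    A minimal sketch of the Amdahl's-law reasoning referred to above: the speedup on n cores for a workload with parallelizable fraction p is 1 / ((1 - p) + p / n), so a near 12-fold speedup on 12 cores implies p close to 1 for these processes. The fractions below are illustrative.

```python
# Sketch: Amdahl's law speedup for a workload with parallelizable fraction p on n cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.99, 0.999):
    print(f"p = {p:.3f}: "
          f"12 cores -> {amdahl_speedup(p, 12):.1f}x, "
          f"64 cores -> {amdahl_speedup(p, 64):.1f}x")
```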

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential of high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  8. An improved virtual aberration model to simulate mask 3D and resist effects

    NASA Astrophysics Data System (ADS)

    Kanaya, Reiji; Fujii, Koichi; Imai, Motokatsu; Matsuyama, Tomoyuki; Tsuzuki, Takao; Lin, Qun Ying

    2015-03-01

    As shrinkage of design features progresses, the difference in best focus positions among different patterns is becoming a fatal issue, especially when many patterns co-exist in a layer. The problem arises from three major factors: aberrations of projection optics, mask 3D topography effects, and resist thickness effects. Aberrations in projection optics have already been thoroughly investigated, but mask 3D topography effects and resist thickness effects are still under study. It is well known that mask 3D topography effects can be simulated by various Electro-magnetic Field (EMF) analysis methods. However, it is almost impossible to use them for full chip modeling because all of these methods are extremely computationally intensive. Consequently, they usually apply only to a limited range of mask patterns which are about tens of square micro meters in area. Resist thickness effects on best focus positions are rarely treated as a topic of lithography investigations. Resist 3D effects are treated mostly for resist profile prediction, which also requires an intensive EMF analysis when one needs to predict it accurately. In this paper, we present a simplified Virtual Aberration (VA) model to simulate both mask 3D induced effects and resist thickness effects. A conventional simulator, when applied with this simplified method, can factor in both mask 3D topography effects and resist thickness effects. Thus it can be used to model inter-pattern Best Focus Difference (BFD) issues with the least amount of rigorous EMF analysis.

  9. An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard

    2014-05-01

    In this paper we present an interactive 3D visualization tool for scientific analysis and planning of planetary missions. At the moment scientists have to look at individual camera images separately. There is no tool to combine them in three dimensions and look at them seamlessly as a geologist would do (by walking backwards and forwards resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g. rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important. Therefore the material properties of the rock surfaces will be considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphical Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth facilitating scientific assessment. Besides viewing the virtual reconstruction on multiple scales, scientists can also perform various measurements, i.e. geo-coordinates of a selected point or distance between two surface points. Rover or other models can be placed into the scene and snapped onto certain location of the terrain. These are

  10. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.
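
    A minimal sketch of one common form of coded structured light, binary Gray-code stripe patterns, which let each camera pixel be matched to a projector column; the projector resolution is a placeholder and this is not necessarily the exact coding used in the paper.

```python
# Sketch: generate vertical-stripe Gray-code patterns that encode each projector column.
import numpy as np

def gray_code_patterns(width=1024, height=768):
    """Return a list of stripe patterns, most significant bit first."""
    n_bits = int(np.ceil(np.log2(width)))
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)                     # binary-reflected Gray code
    patterns = []
    for bit in range(n_bits - 1, -1, -1):
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

patterns = gray_code_patterns()
print(f"{len(patterns)} patterns of shape {patterns[0].shape}")
# Projecting these patterns (and their inverses) and thresholding the camera
# images recovers, per camera pixel, the projector column it observes.
```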

  11. Effects of 3D Virtual Reality of Plate Tectonics on Fifth Grade Students' Achievement and Attitude toward Science

    ERIC Educational Resources Information Center

    Kim, Paul

    2006-01-01

    This study examines the effects of a teaching method using 3D virtual reality simulations on achievement and attitude toward science. An experiment was conducted with fifth-grade students (N = 41) to examine the effects of 3D simulations, designed to support inquiry-based science curriculum. An ANOVA analysis revealed that the 3D group scored…

  12. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    ERIC Educational Resources Information Center

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most…

  13. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful to preserve the information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funerary objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabtis, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  14. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus of environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and create the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. We used this technology to complete the environmental education and digital cultural Mackay campus. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are designed according to their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they would like to take to view the historical sites on the platform.

  15. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective and data storage limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  16. Early pregnancy placental bed and fetal vascular volume measurements using 3-D virtual reality.

    PubMed

    Reus, Averil D; Klop-van der Aa, Josine; Rifouna, Maria S; Koning, Anton H J; Exalto, Niek; van der Spek, Peter J; Steegers, Eric A P

    2014-08-01

    In this study, a new 3-D Virtual Reality (3D VR) technique for examining placental and uterine vasculature was investigated. The validity of placental bed vascular volume (PBVV) and fetal vascular volume (FVV) measurements was assessed and associations of PBVV and FVV with embryonic volume, crown-rump length, fetal birth weight and maternal parity were investigated. One hundred thirty-two patients were included in this study, and measurements were performed in 100 patients. Using V-Scope software, 100 3-D Power Doppler data sets of 100 pregnancies at 12 wk of gestation were analyzed with 3D VR in the I-Space Virtual Reality system. Volume measurements were performed with semi-automatic, pre-defined parameters. The inter-observer and intra-observer agreement was excellent with all intra-class correlation coefficients >0.93. PBVVs of multiparous women were significantly larger than the PBVVs of primiparous women (p = 0.008). In this study, no other associations were found. In conclusion, V-Scope offers a reproducible method for measuring PBVV and FVV at 12 wk of gestation, although we are unsure whether the volume measured represents the true volume of the vasculature. Maternal parity influences PBVV. PMID:24798392

  17. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
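
    A minimal sketch of the 3D point reconstruction step for a calibrated stereo rig, triangulating one matched target observation with OpenCV; the projection matrices and pixel coordinates are illustrative, not values from the system described above.

```python
# Sketch: triangulate a single matched observation from two calibrated cameras.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])

# Left camera at the origin; right camera centre 0.5 m along +x (so t = -0.5).
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

pt_left = np.array([[620.0], [470.0]])    # target seen in the left image (pixels)
pt_right = np.array([[560.0], [470.0]])   # same target in the right image (pixels)

point_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Reconstructed 3D point (m):", point_3d)
```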

  18. Fast extraction of minimal paths in 3D images and applications to virtual endoscopy.

    PubMed

    Deschamps, T; Cohen, L D

    2001-12-01

    The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in the most automatic way possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. Path construction in 3D images becomes a very tedious task, and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path tracking method to overcome those drawbacks: we are able to build a path given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] for extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution which extends minimal paths to 3D images and gives new improvements of the approach that are relevant in 2D as well as in 3D for extracting linear structures in images. It includes techniques to make the path extraction scheme faster and easier by reducing user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimal interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR). PMID:11731307
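
    A minimal sketch of the minimal-path idea, assuming the scikit-fmm package: compute an arrival-time map from the start point with the Fast Marching method, then backtrack from the end point by gradient descent on that map. It is shown in 2D for brevity with toy speeds and points; the same scheme extends to 3D images.

```python
# Sketch: Fast Marching arrival times from a start point, then gradient-descent
# backtracking from the end point to trace an (approximate) minimal path.
import numpy as np
import skfmm

speed = np.ones((100, 100))
speed[40:60, 20:80] = 0.05            # a slow region the path should avoid

phi = np.ones_like(speed)
phi[10, 10] = -1.0                    # start point: zero level set around (10, 10)
arrival = np.asarray(skfmm.travel_time(phi, speed))

gy, gx = np.gradient(arrival)
path, pos = [], np.array([90.0, 90.0])    # end point (row, col)
for _ in range(5000):
    path.append(pos.copy())
    i, j = int(round(pos[0])), int(round(pos[1]))
    step = np.array([gy[i, j], gx[i, j]])
    norm = np.linalg.norm(step)
    if norm < 1e-9 or np.hypot(*(pos - [10, 10])) < 1.0:
        break
    pos = np.clip(pos - 0.5 * step / norm, 0.0, 99.0)   # move downhill in arrival time
print(f"path of {len(path)} samples from (90, 90) towards (10, 10)")
```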

  19. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements such as paintings, computer-generated objects and scanned objects are added. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo modes, and has been optimized to allow high-quality rendering.

  20. Virtual Spring-Based 3D Multi-Agent Group Coordination

    NASA Astrophysics Data System (ADS)

    Daneshvar, Roozbeh; Shih, Liwen

    As future personal vehicles start enjoying the ability to fly, tackling safe transportation coordination can be a tremendous task, far beyond the current challenge of radar-screen monitoring in already saturated air traffic control. Our focus is on the distributed safe-distance coordination among a group of autonomous flying vehicle agents, where each follows its own current straight-line direction in a 3D space with variable speeds. A virtual spring-based model is proposed for the group coordination. Within a specified neighborhood radius, each vehicle forms a virtual connection with each neighbor vehicle by a virtual spring. As the vehicle changes its position, speed and altitude, the total resultant forces on each virtual spring try to maintain zero by moving to the mechanical equilibrium point. The agents then add the simple total virtual spring constraints to their movements to determine their next positions individually. Together, the multi-agent vehicles reach a group behavior, where each of them keeps a minimal safe distance from the others. A new safe behavior thus arises at the group level. With the proposed virtual spring coordination model, the vehicles need no direct communication with each other, require only minimum local processing resources, and the control is completely distributed. New behaviors can now be formulated and studied based on the proposed model, e.g., how a fast-driving vehicle can find its way through the crowd by avoiding the other vehicles effortlessly.
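
    The virtual-spring rule described above can be sketched as follows: each agent sums Hooke-law forces from neighbours within a sensing radius and integrates them into its motion. The rest length, stiffness, damping and time step below are invented values for illustration, not parameters from the paper.

      import numpy as np

      def spring_step(pos, vel, radius=5.0, rest=3.0, k=0.8, damping=0.9, dt=0.1):
          """One coordination step: every agent applies virtual springs to its
          neighbours inside `radius`, pulling the spacing toward `rest`."""
          n = len(pos)
          force = np.zeros_like(pos)
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  d = pos[j] - pos[i]
                  dist = np.linalg.norm(d)
                  if 0 < dist < radius:
                      # Hooke's law along the line joining the two agents.
                      force[i] += k * (dist - rest) * d / dist
          vel = damping * vel + dt * force
          return pos + dt * vel, vel

      # Three agents starting too close together drift apart toward the rest length.
      pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.5, 1.0, 0]])
      vel = np.zeros_like(pos)
      for _ in range(200):
          pos, vel = spring_step(pos, vel)
      print(np.linalg.norm(pos[0] - pos[1]))   # approaches ~3.0, the rest length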

  1. Visuomotor learning in immersive 3D virtual reality in Parkinson's disease and in aging.

    PubMed

    Messier, Julie; Adamovich, Sergei; Jack, David; Hening, Wayne; Sage, Jacob; Poizner, Howard

    2007-05-01

    Successful adaptation to novel sensorimotor contexts critically depends on efficient sensory processing and integration mechanisms, particularly those required to combine visual and proprioceptive inputs. If the basal ganglia are a critical part of specialized circuits that adapt motor behavior to new sensorimotor contexts, then patients who are suffering from basal ganglia dysfunction, as in Parkinson's disease, should show sensorimotor learning impairments. However, this issue has been under-explored. We tested the ability of eight patients with Parkinson's disease (PD), off medication, ten healthy elderly subjects and ten healthy young adults to reach to a remembered 3D location presented in an immersive virtual environment. A multi-phase learning paradigm with four conditions was used: baseline, initial learning, reversal learning and aftereffect. In initial learning, the computer altered the position of a simulated arm endpoint used for movement feedback by shifting its apparent location diagonally, thereby requiring both horizontal and vertical compensations. This visual distortion forced subjects to learn new coordinations between what they saw in the virtual environment and the actual position of their limbs, which they had to derive from proprioceptive information (or efference copy). In reversal learning, the sign of the distortion was reversed. Both elderly subjects and PD patients showed learning phase-dependent difficulties. First, elderly controls were slower than young subjects when learning both dimensions of the initial biaxial discordance. However, their performance improved during reversal learning and, as a result, elderly and young controls showed similar adaptation rates during reversal learning. Second, in striking contrast to healthy elderly subjects, PD patients were more profoundly impaired during the reversal phase of learning. PD patients were able to learn the initial biaxial discordance but were on average slower than age-matched controls

  2. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  3. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  4. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    PubMed

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination, and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which immediately solves the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, plus 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information of point-spread-function at deep tissue, and provides much better resolution enhancement plus greatly improved noise-immune response. This method is ready to be amalgamated with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077

  5. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
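
    The scalability analysis mentioned above rests on Amdahl's law. A small helper, with a made-up parallel fraction rather than the one measured in the paper, illustrates the kind of estimate involved.

      def amdahl_speedup(parallel_fraction, cores):
          """Amdahl's law: S(n) = 1 / ((1 - p) + p / n)."""
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

      # With a hypothetical 95% parallel fraction, 12 cores give roughly a 7.7x speedup;
      # a measured 12-fold gain on 12 cores would imply an even higher parallel fraction.
      for n in (1, 4, 12, 48):
          print(n, round(amdahl_speedup(0.95, n), 2))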

  6. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837

  7. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid-1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. PMID:24727389

  8. Dynamic WIFI-Based Indoor Positioning in 3D Virtual World

    NASA Astrophysics Data System (ADS)

    Chan, S.; Sohn, G.; Wang, L.; Lee, W.

    2013-11-01

    A web-based system based on the 3DTown project was proposed using the Google Earth plug-in that brings information from indoor positioning devices and real-time sensors into an integrated 3D indoor and outdoor virtual world to visualize the dynamics of urban life within the 3D context of a city. We addressed a limitation of the 3DTown project, with particular emphasis on the video surveillance cameras used for indoor tracking purposes. The proposed solution was to utilize wireless local area network (WLAN) WiFi as a replacement technology for localizing objects of interest, due to the widespread availability and large coverage area of WiFi in indoor building spaces. Indoor positioning was performed using WiFi without modifying existing building infrastructure or introducing additional access points (APs). A hybrid probabilistic approach was used for indoor positioning, based on a previously recorded WiFi fingerprint database in the Petrie Science and Engineering building at York University. In addition, we have developed a 3D building modeling module that allows for efficient reconstruction of outdoor building models to be integrated with indoor building models; a sensor module for receiving, distributing, and visualizing real-time sensor data; and a web-based visualization module for users to explore the dynamic urban life in a virtual world. In order to solve the problems in the implementation of the proposed system, we introduce approaches for integration of indoor building models with indoor positioning data, as well as real-time sensor information and visualization on the web-based system. In this paper we report the preliminary results of our prototype system, demonstrating the system's capability for implementing a dynamic 3D indoor and outdoor virtual world that is composed of discrete modules connected through pre-determined communication protocols.
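
    The fingerprint-matching idea behind such WiFi indoor positioning can be sketched with a simple weighted k-nearest-neighbour estimate in RSSI space. This deterministic variant stands in for the hybrid probabilistic approach mentioned above; the fingerprint database and the observed signal strengths are invented.

      import numpy as np

      # Hypothetical fingerprint database: reference position -> mean RSSI (dBm) per AP.
      fingerprints = {
          (0.0, 0.0): np.array([-40.0, -70.0, -80.0]),
          (5.0, 0.0): np.array([-55.0, -50.0, -75.0]),
          (5.0, 5.0): np.array([-70.0, -45.0, -60.0]),
          (0.0, 5.0): np.array([-60.0, -65.0, -50.0]),
      }

      def locate(rssi, k=3):
          """Weighted k-nearest-neighbour position estimate in RSSI space."""
          dists = sorted(
              (np.linalg.norm(rssi - fp), pos) for pos, fp in fingerprints.items()
          )[:k]
          weights = np.array([1.0 / (d + 1e-6) for d, _ in dists])
          points = np.array([pos for _, pos in dists])
          return tuple(weights @ points / weights.sum())

      # A live scan that resembles the first two reference points most closely.
      print(locate(np.array([-50.0, -55.0, -72.0])))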

  9. Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues.

    PubMed

    Calì, Corrado; Baghabra, Jumana; Boges, Daniya J; Holst, Glendon R; Kreshuk, Anna; Hamprecht, Fred A; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki; Magistretti, Pierre J

    2016-01-01

    Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling package with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. PMID:26179415

  10. From GUI to Gallery: A Study of Online Virtual Environments.

    ERIC Educational Resources Information Center

    Guynup, Stephen Lawrence

    This paper began as an attempt to clarify and classify the development of Web3D environments from 1995 to the present. In that process, important facts came to light. First, a large proportion of these sites were virtual galleries and museums. Second, these same environments covered a wide array of architectural interpretations and represented some of…

  11. Options in virtual 3D, optical-impression-based planning of dental implants.

    PubMed

    Reich, Sven; Kern, Thomas; Ritter, Lutz

    2014-01-01

    If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The entire datasets are virtually superimposed, by a "registration" process, on the corresponding structures (teeth) in the CBCTs. Thus, both the osseous and prosthetic structures are visible in one single 3D application, and it becomes possible to consider surgical and prosthetic aspects together. After the implant positions have been determined on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in a CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow. PMID:25098158

  12. Building virtual 3D bone fragment models to control diaphyseal fracture reduction

    NASA Astrophysics Data System (ADS)

    Leloup, Thierry; Schuind, Frederic; Lasudry, Nadine; Van Ham, Philippe

    1999-05-01

    Most fractures of the long bones are displaced and need to be surgically reduced. External fixation is often used, but the crucial point of this technique is the control of reduction, which is effected with a brilliance amplifier. This system, which instantly provides an x-ray image, has many disadvantages. It implies frequent irradiation of the patient and the surgical team, the visual field is limited, the supplied images are distorted and it only gives 2D information. Consequently, the reduction is occasionally imperfect although intraoperatively it appears acceptable. Using the pins inserted in each fragment as markers and an optical tracker, it is possible to build a virtual 3D model for each principal fragment and to follow its movement during the reduction. This system will supply a 3D image of the fracture in real time and without irradiation. The brilliance amplifier could then be replaced by such a virtual reality system to provide the surgeon with an accurate tool facilitating the reduction of the fracture. The purpose of this work is to show how to build the 3D model for each principal bone fragment.

  13. Avalanche for shape and feature-based virtual screening with 3D alignment.

    PubMed

    Diller, David J; Connell, Nancy D; Welsh, William J

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns. PMID:26458937

  14. Exploring conformational search protocols for ligand-based virtual screening and 3-D QSAR modeling.

    PubMed

    Cappel, Daniel; Dixon, Steven L; Sherman, Woody; Duan, Jianxin

    2015-02-01

    3-D ligand conformations are required for most ligand-based drug design methods, such as pharmacophore modeling, shape-based screening, and 3-D QSAR model building. Many studies of conformational search methods have focused on the reproduction of crystal structures (i.e. bioactive conformations); however, for ligand-based modeling the key question is how to generate a ligand alignment that produces the best results for a given query molecule. In this work, we study different conformation generation modes of ConfGen and the impact on virtual screening (Shape Screening and e-Pharmacophore) and QSAR predictions (atom-based and field-based). In addition, we develop a new search method, called common scaffold alignment, that automatically detects the maximum common scaffold between each screening molecule and the query to ensure identical coordinates of the common core, thereby minimizing the noise introduced by analogous parts of the molecules. In general, we find that virtual screening results are relatively insensitive to the conformational search protocol; hence, a conformational search method that generates fewer conformations could be considered "better" because it is more computationally efficient for screening. However, for 3-D QSAR modeling we find that more thorough conformational sampling tends to produce better QSAR predictions. In addition, significant improvements in QSAR predictions are obtained with the common scaffold alignment protocol developed in this work, which focuses conformational sampling on parts of the molecules that are not part of the common scaffold. PMID:25408244
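
    The common scaffold alignment idea, detecting the maximum common substructure shared by the query and a screening molecule and then pinning its coordinates, can be illustrated with RDKit's MCS search. The SMILES strings below are invented examples, and the snippet only shows the substructure-detection step, not the full conformational sampling protocol from the paper.

      from rdkit import Chem
      from rdkit.Chem import rdFMCS

      # Hypothetical query and screening molecule sharing a benzanilide core.
      query = Chem.MolFromSmiles("c1ccccc1C(=O)Nc1ccccc1")
      candidate = Chem.MolFromSmiles("c1ccc(Cl)cc1C(=O)Nc1ccc(F)cc1")

      # Maximum common substructure between the two molecules.
      mcs = rdFMCS.FindMCS([query, candidate])
      core = Chem.MolFromSmarts(mcs.smartsString)

      # Atom indices of the shared core in each molecule; in a full protocol these
      # matched atoms would be held at identical 3D coordinates while only the
      # non-core parts of the molecule are conformationally sampled.
      print(mcs.numAtoms, query.GetSubstructMatch(core), candidate.GetSubstructMatch(core))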

  15. A virtual interface for interactions with 3D models of the human body.

    PubMed

    De Paolis, Lucio T; Pulimeno, Marco; Aloisio, Giovanni

    2009-01-01

    The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon is able to visualize 3D models of the patient's organs more effectively during a surgical procedure or to use it in pre-operative planning. The doctor will be able to rotate, translate and zoom in on 3D models of the patient's organs simply by moving his finger in free space; in addition, it is possible to choose to visualize all of the organs or only some of them. All of the interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space in a position chosen by the user when the application is started up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen. PMID:19377116

  16. Stroke Rehabilitation Using Virtual Environments.

    PubMed

    Fu, Michael J; Knutson, Jayme S; Chae, John

    2015-11-01

    This review covers the rationale, mechanisms, and availability of commercially available virtual environment-based interventions for stroke rehabilitation. It describes interventions for motor, speech, cognitive, and sensory dysfunction. Also discussed are the important features and mechanisms that allow virtual environments to facilitate motor relearning. A common challenge is the inability to translate success in small trials to efficacy in larger populations. The heterogeneity of stroke pathophysiology has been blamed, and experts advocate for the study of multimodal approaches. Therefore, this article also introduces a framework to help define new therapy combinations that may be necessary to address stroke heterogeneity. PMID:26522910

  17. Virtual environment tactile system

    DOEpatents

    Renzi, R.

    1996-12-10

    A method for providing a realistic sense of touch in virtual reality by means of programmable actuator assemblies is disclosed. Each tactile actuator assembly consists of a number of individual actuators whose movement is controlled by a computer and associated drive electronics. When an actuator is energized, the rare earth magnet and the associated contactor, incorporated within the actuator, are set in motion by the opposing electromagnetic field of a surrounding coil. The magnet pushes the contactor forward to contact the skin resulting in the sensation of touch. When the electromagnetic field is turned off, the rare earth magnet and the contactor return to their neutral positions due to the magnetic equilibrium caused by the interaction with the ferrous outer sleeve. The small size and flexible nature of the actuator assemblies permit incorporation into a glove, boot or body suit. The actuator has additional applications, such as, for example, as an accelerometer, an actuator for precisely controlled actuations or to simulate the sensation of braille letters. 28 figs.

  18. Virtual environment tactile system

    DOEpatents

    Renzi, Ronald

    1996-01-01

    A method for providing a realistic sense of touch in virtual reality by means of programmable actuator assemblies is disclosed. Each tactile actuator assembly consists of a number of individual actuators whose movement is controlled by a computer and associated drive electronics. When an actuator is energized, the rare earth magnet and the associated contactor, incorporated within the actuator, are set in motion by the opposing electromagnetic field of a surrounding coil. The magnet pushes the contactor forward to contact the skin resulting in the sensation of touch. When the electromagnetic field is turned off, the rare earth magnet and the contactor return to their neutral positions due to the magnetic equilibrium caused by the interaction with the ferrous outer sleeve. The small size and flexible nature of the actuator assemblies permit incorporation into a glove, boot or body suit. The actuator has additional applications, such as, for example, as an accelerometer, an actuator for precisely controlled actuations or to simulate the sensation of braille letters.

  19. Elastic registration using 3D ChainMail: application to virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Daly, Barry; Shekhar, Raj

    2006-03-01

    We present an elastic registration algorithm based on local deformations modeled using cubic B-splines and controlled using 3D ChainMail. Our algorithm eliminates the appearance of folding artifacts and allows local rigidity and compressibility control independent of the image similarity metric being used. 3D ChainMail propagates large internal deformations between neighboring B-Spline control points, thereby preserving the topology of the transformed image without requiring the addition of penalty terms based on rigidity of the transformation field to the equation used to maximize image similarity. A novel application to virtual colonoscopy is presented where the algorithm is used to significantly improve cross-localization between colon locations in prone and supine CT images.
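
    The 3D ChainMail idea, propagating a displacement through a lattice of linked elements by enforcing minimum and maximum spacing between neighbours, can be conveyed with a simplified one-dimensional sketch. The stretch limits below are invented, and the real algorithm additionally bounds shear and works on a full 3D lattice of B-spline control points.

      def chainmail_1d(x, moved_idx, new_pos, min_d=0.5, max_d=1.5):
          """Move one element of a 1D chain and propagate min/max spacing constraints
          outward, in the spirit of the ChainMail algorithm (simplified sketch)."""
          x = list(x)
          x[moved_idx] = new_pos
          # Propagate to the right.
          for i in range(moved_idx + 1, len(x)):
              gap = x[i] - x[i - 1]
              if gap < min_d:
                  x[i] = x[i - 1] + min_d
              elif gap > max_d:
                  x[i] = x[i - 1] + max_d
              else:
                  break  # constraint satisfied, disturbance stops propagating
          # Propagate to the left.
          for i in range(moved_idx - 1, -1, -1):
              gap = x[i + 1] - x[i]
              if gap < min_d:
                  x[i] = x[i + 1] - min_d
              elif gap > max_d:
                  x[i] = x[i + 1] - max_d
              else:
                  break
          return x

      # Dragging the middle element of an evenly spaced chain to the right stretches
      # the left neighbours (up to max_d) and compresses the right ones (down to min_d).
      print(chainmail_1d([0.0, 1.0, 2.0, 3.0, 4.0], moved_idx=2, new_pos=3.4))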

  20. Risk Analysis Virtual ENvironment

    2014-02-10

    RAVEN has 3 major functionalities: 1. Provides a Graphical User Interface for the pre- and post-processing of the RELAP-7 input and output. 2. Provides the capability to model nuclear power plant control logic for the RELAP-7 code and dynamic control of the accident scenario evolution. This capability is based on a software structure that realizes a direct connection between the RELAP-7 solver engine (MOOSE) and a python environment where the variables describing the plant status are accessible in a scripting environment. RAVEN supports the generation of the probabilistic scenario control by supplying a wide range of probability and cumulative distribution functions and their inverse functions. 3. Provides a general environment to perform probability risk analysis for RELAP-7, RELAP-5 and any generic MOOSE-based application. The probabilistic analysis is performed by sampling the input space of the coupled code parameters and it is enhanced by using modern artificial intelligence algorithms that accelerate the identification of the areas of major risk (in the input parameter space). This environment also provides a graphical visualization capability to analyze the outcomes. Among other approaches, the classical Monte Carlo and Latin Hypercube sampling algorithms are available. For the acceleration of the convergence of the sampling methodologies, Support Vector Machines, Bayesian regression, and collocation stochastic polynomials chaos are implemented. The same methodologies described here could be used to solve optimization and uncertainty propagation problems using the RAVEN framework.

  1. Virtual interface environment

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture is under development for use as a multipurpose interface environment. Initial applications of the system are in telerobotics, data-management and human factors research. System configuration and research directions are described.

  2. Risk Analysis Virtual ENvironment

    SciTech Connect

    2014-02-10

    RAVEN has 3 major functionalities: 1. Provides a Graphical User Interface for the pre- and post-processing of the RELAP-7 input and output. 2. Provides the capability to model nuclear power plant control logic for the RELAP-7 code and dynamic control of the accident scenario evolution. This capability is based on a software structure that realizes a direct connection between the RELAP-7 solver engine (MOOSE) and a python environment where the variables describing the plant status are accessible in a scripting environment. RAVEN supports the generation of the probabilistic scenario control by supplying a wide range of probability and cumulative distribution functions and their inverse functions. 3. Provides a general environment to perform probability risk analysis for RELAP-7, RELAP-5 and any generic MOOSE-based application. The probabilistic analysis is performed by sampling the input space of the coupled code parameters and it is enhanced by using modern artificial intelligence algorithms that accelerate the identification of the areas of major risk (in the input parameter space). This environment also provides a graphical visualization capability to analyze the outcomes. Among other approaches, the classical Monte Carlo and Latin Hypercube sampling algorithms are available. For the acceleration of the convergence of the sampling methodologies, Support Vector Machines, Bayesian regression, and collocation stochastic polynomials chaos are implemented. The same methodologies described here could be used to solve optimization and uncertainty propagation problems using the RAVEN framework.

  3. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined, structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  4. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  5. Virtual Control Systems Environment (VCSE)

    SciTech Connect

    Atkins, Will

    2012-10-08

    Will Atkins, a Sandia National Laboratories computer engineer discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.

  6. Virtual Control Systems Environment (VCSE)

    ScienceCinema

    Atkins, Will

    2014-02-26

    Will Atkins, a Sandia National Laboratories computer engineer discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.

  7. Quality in virtual education environments

    ERIC Educational Resources Information Center

    Barbera, Elena

    2004-01-01

    The emergence of the Internet has changed the way we teach and learn. This paper provides a general overview of the state of the quality of virtual education environments. First of all, some problems with the quality criteria applied in this field and the need to develop quality seals are presented. Likewise, the dimensions and subdimensions of an…

  8. Cognitive Styles and Virtual Environments.

    ERIC Educational Resources Information Center

    Ford, Nigel

    2000-01-01

    Discussion of navigation through virtual information environments focuses on the need for robust user models that take into account individual differences. Considers Pask's information processing styles and strategies; deep (transformational) and surface (reproductive) learning; field dependence/independence; divergent/convergent thinking;…

  9. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich media implementation techniques. This meant that each and every system had to be created individually, as reapplying the media, be it only a part or the whole content, was not directly possible: everything had to be applied mechanically, i.e. by hand, making E-Learning systems exceedingly expensive to generate in terms of both time and money. Media-3D, or M3D, is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language which is capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for training technical personnel.

  10. Platform for setting up interactive virtual environments

    NASA Astrophysics Data System (ADS)

    Souza, Danilo; Dias, Paulo; Santos, Daniel; Sousa Santos, Beatriz

    2014-02-01

    This paper introduces pSIVE, a platform that allows the easy setting up of Virtual Environments, with interactive information (for instance, a video or a document about a machine that is present in the virtual world) that can be accessed for different 3D elements. The main goal is to support evaluation and training in a virtual factory, while remaining generic enough to be applied in different contexts (academic and touristic, for instance) by non-expert users. We show some preliminary results obtained from two different scenarios: first, a production line of a factory with contextualized information associated with different elements, aimed at the training of employees; second, a testing environment used to compare and assess two different selection styles that were integrated in pSIVE, and to allow different users to interact with an environment created with pSIVE in order to collect opinions about the system. The conclusions show that overall satisfaction was high, and the comments will be considered in further platform development.

  11. Neurally and ocularly informed graph-based models for searching 3D environments

    NASA Astrophysics Data System (ADS)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
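
    The semi-supervised spreading of implicit labels over a graph of visually similar objects can be sketched with a standard label-propagation iteration on a normalised similarity matrix. The graph, the labels and the parameter values below are invented; this illustrates the general technique rather than the specific model used in the study.

      import numpy as np

      def propagate_labels(W, y, alpha=0.8, iters=50):
          """Semi-supervised label propagation: f <- alpha * S f + (1 - alpha) * y,
          with S the symmetrically normalised similarity matrix."""
          d = W.sum(axis=1)
          S = W / np.sqrt(np.outer(d, d))
          f = y.astype(float).copy()
          for _ in range(iters):
              f = alpha * S @ f + (1 - alpha) * y
          return f

      # Five objects; object 0 is labelled "interesting" (from neural/ocular evidence),
      # object 4 is labelled "not interesting". Edges encode visual similarity.
      W = np.array([
          [0, 1, 1, 0, 0],
          [1, 0, 1, 0, 0],
          [1, 1, 0, 1, 0],
          [0, 0, 1, 0, 1],
          [0, 0, 0, 1, 0],
      ], dtype=float)
      y = np.array([1.0, 0, 0, 0, -1.0])
      print(np.round(propagate_labels(W, y), 2))   # scores decay with graph distance from the labels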

  12. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study

    PubMed Central

    Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-01-01

    Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real

  13. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  14. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    PubMed

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi

  15. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
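
    The distributable block-volume idea, splitting a large 3D volume into blocks that are processed on multiple cores and then reassembled, can be sketched with Python's multiprocessing. The per-block operation here is a placeholder threshold, and the halo/overlap handling that real 3D image-processing operators need is omitted.

      import numpy as np
      from multiprocessing import Pool

      def process_block(args):
          """Placeholder per-block operation (here: simple intensity thresholding)."""
          index, block = args
          return index, (block > 0.5).astype(np.uint8)

      def run_in_blocks(volume, block_size=64, workers=4):
          """Split a 3D volume into z-slabs, process them in parallel, and reassemble."""
          blocks = [
              (z, volume[z:z + block_size])
              for z in range(0, volume.shape[0], block_size)
          ]
          with Pool(workers) as pool:
              results = pool.map(process_block, blocks)
          out = np.empty(volume.shape, dtype=np.uint8)
          for z, processed in results:
              out[z:z + processed.shape[0]] = processed
          return out

      if __name__ == "__main__":
          vol = np.random.rand(256, 128, 128).astype(np.float32)
          mask = run_in_blocks(vol)
          print(mask.shape, mask.mean())   # roughly half the voxels above threshold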

  16. Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration

    NASA Astrophysics Data System (ADS)

    Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.

    2013-07-01

    This paper presents a 3D platform that enables both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scale 3D measurements based on photogrammetry, documentation produced by experts and the knowledge of the involved domains, leaving the experts able to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world during the process of archaeological surveys is in fact the main goal of a survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not solve the problem of surveys. New opportunities for 3D representation are now available and we must use them and find new ways to link geometry and knowledge. The new platform is able to efficiently manage and process large 3D data (point sets, meshes) thanks to the implementation of state-of-the-art space partitioning methods such as octrees and kd-trees, and thus can interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds and geometrical primitive optimization. The platform provides an interface that enables experts to describe geometric representations of interesting objects, like ashlar blocks, stratigraphic units or generic items (contours, lines, …), directly on the 3D representation of the site and without explicit links to the underlying algorithms. The platform provides two ways of describing geometric representations. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection onto the underlying mesh or point cloud. If photographs are not available, or if the expert wants to use only the 3D representation, then he can simply draw object shapes on it. When 3D
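
    The real-time interaction with dense point clouds described above relies on spatial partitioning structures such as kd-trees. A short sketch with SciPy's cKDTree shows the kind of nearest-neighbour and radius queries involved, for instance when snapping a drawn annotation point onto the closest scanned point; the point cloud here is synthetic.

      import numpy as np
      from scipy.spatial import cKDTree

      # Synthetic stand-in for a dense survey point cloud (1 million points).
      rng = np.random.default_rng(0)
      cloud = rng.uniform(0, 50, size=(1_000_000, 3))

      tree = cKDTree(cloud)          # build the spatial partition once

      # Snap a point picked by the expert onto the nearest scanned point,
      # and fetch all points within 0.5 m for local surface fitting.
      picked = np.array([12.3, 7.8, 31.0])
      dist, idx = tree.query(picked)
      neighbours = tree.query_ball_point(picked, r=0.5)
      print(dist, cloud[idx], len(neighbours))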

  17. Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis

    ERIC Educational Resources Information Center

    Chow, Meyrick

    2016-01-01

    There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…

  18. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    The recent developments in 3D scanning technologies have not been accompanied by visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease their action and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphical inscriptions are among the most affected by these processes because they are, due to their very nature, carved on the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of on-going work dedicated to exploring in new ways the information contained in 3D scanning data sets. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It makes it possible to contrast human patterns still present at the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained in the wider forms of the backdrop. Afterwards, we have simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM

  19. Towards a Transcription System of Sign Language for 3D Virtual Agents

    NASA Astrophysics Data System (ADS)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, in general the recognition and reproduction of signs in these systems is an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation comes close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store, and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand sign language structure and grammar.

  20. System Management Software for Virtual Environments

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2007-01-01

    Recently there has been an increased interest in the use of system-level virtualization using mature solutions such as Xen, QEMU, or VMWare. These virtualization platforms are being used in distributed and parallel environments including high performance computing. The use of virtual machines within such environments introduces new challenges to system management. These include tedious tasks such as deploying para-virtualized host operating systems to support virtual machine execution or virtual overlay networks to connect these virtual machines. Additionally, there is the problem of machine definition and deployment, which is complicated by differentiation in the underlying virtualization technology. This paper discusses tools for the deployment and management of both host operating systems and virtual machines in clusters. We begin with an overview of system-level virtualization and move on to a description of tools that we have developed to aid with these environments. These tools extend prior work in the area of cluster installation, configuration and management.

  1. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors

    PubMed Central

    Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2015-01-01

    The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded an SD value of 0.75 pIC50 units relative to the measured inhibition affinities and a Pearson correlation coefficient R2 of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy-set validation was used to measure the efficiency of the model by calculating the enrichment factor (EF). Based on the combinatorial pharmacophore model, a virtual screening against the SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors. PMID:26110383
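
    The enrichment factor (EF) used in the decoy-set validation compares the hit rate among the top-ranked fraction of a screened library with the hit rate over the whole library. Below is a generic sketch of that calculation, not the authors' exact evaluation protocol.

        # Standard enrichment-factor calculation used to judge a virtual-screening model;
        # a general-purpose sketch, not the authors' exact protocol.
        def enrichment_factor(scores, is_active, top_fraction=0.01):
            """scores: higher = predicted more active; is_active: booleans per compound."""
            ranked = sorted(zip(scores, is_active), key=lambda p: p[0], reverse=True)
            n_total = len(ranked)
            n_top = max(1, int(round(top_fraction * n_total)))
            actives_top = sum(1 for _, a in ranked[:n_top] if a)
            actives_total = sum(1 for _, a in ranked if a)
            hit_rate_top = actives_top / n_top
            hit_rate_all = actives_total / n_total
            return hit_rate_top / hit_rate_all if hit_rate_all > 0 else float("nan")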

  2. Virtual Environments for Data Preservation

    NASA Astrophysics Data System (ADS)

    Beckmann, Volker

    Data preservation in a wider sense also includes the ability to analyse data from past experiments. Because operating systems, such as Linux and Windows, evolve rapidly, software packages can become outdated and unusable only a few years after they were written. Creating an image of the operating system is one way to be able to launch the analysis software on a computing infrastructure independent of the local operating system in use. At the same time, virtualization also allows the same software to be launched in collaborations across several institutes with very different computing infrastructures. At the François Arago Centre of the APC in Paris we provide user support for virtualization and computing environment access to the scientific community

  3. Design and fabrication of concave-convex lens for head mounted virtual reality 3D glasses

    NASA Astrophysics Data System (ADS)

    Deng, Zhaoyang; Cheng, Dewen; Hu, Yuan; Huang, Yifan; Wang, Yongtian

    2015-08-01

    As a kind of lightweight and convenient tool for achieving stereoscopic vision, virtual reality glasses are gaining popularity nowadays. For these glasses, molded plastic lenses are often adopted to handle both the imaging properties and the cost of mass production. However, the as-built performance of the glasses depends on both the optical design and the injection molding process, and maintaining the profile of the lens during the injection molding process presents particular challenges. In this paper, optical design is combined with processing simulation analysis to obtain a design result suitable for injection molding. Based on the design and analysis results, different experiments were done using high-quality equipment to optimize the process parameters of injection molding. Finally, a single concave-convex lens was designed with a field of view of 90° for the virtual reality 3D glasses. The as-built profile error of the lens is controlled within 5 μm, which indicates that the designed shape of the lens is faithfully realized and the designed optical performance can thus be achieved.

  4. 3D modeling of the Strasbourg's Cathedral basements for interdisciplinary research and virtual visits

    NASA Astrophysics Data System (ADS)

    Landes, T.; Kuhnle, G.; Bruna, R.

    2015-08-01

    On the occasion of the millennium celebration of Strasbourg Cathedral, a transdisciplinary research group composed of archaeologists, surveyors, architects, art historians and a stonemason revisited the 1966-1972 excavations under the St. Lawrence Chapel of the Cathedral, which contain remains of Roman and medieval masonry. The 3D modeling of the Chapel was carried out by combining conventional surveying techniques for network creation, laser scanning for model creation, and photogrammetric techniques for texturing a few parts. According to the requirements and the end-user of the model, the level of detail and level of accuracy were adapted and assessed for every floor. The basement was acquired and modeled in more detail and with higher accuracy than the other parts. Thanks to this modeling work, archaeologists can confront their assumptions with those of other disciplines by simulating the construction of other worship edifices on the massive stones composing the basement. The virtual reconstructions provided evidence in support of these assumptions and served for communication via virtual visits.

  5. Using Highly Interactive Virtual Environments for Safeguards Activities

    SciTech Connect

    Weil, Bradley S; Alcala, Benjamin S; Alcala, Scott; Eipeldauer, Mary D; Weil, Logan B

    2010-01-01

    Highly interactive virtual environment (HIVE) is a term that refers to interactive educational simulations, serious games and virtual worlds. Studies indicate that learning with the aid of interactive environments produces better retention and depth of knowledge by promoting improved trainee engagement and understanding. Virtual reality or three dimensional (3D) visualization is often used to promote the understanding of something when personal observation, photographs, drawings, and/or sketches are not possible or available. Subjects and situations, either real or hypothetical, can be developed using a 3D model. Models can be tailored to the audience allowing safeguards and security features to be demonstrated for educational purposes in addition to engineering evaluation and performance analysis. Oak Ridge National Laboratory (ORNL) has begun evaluating the feasibility of HIVEs for improving safeguards activities such as training, mission planning, and evaluating worker task performance. This paper will discuss the development workflow of HIVEs and present some recent examples.

  6. Affordable virtual environments: building a virtual beach for clinical use.

    PubMed

    Sherstyuk, Andrei; Aschwanden, Christoph; Saiki, Stanley

    2005-01-01

    Virtual Reality has been used for clinical application for about 10 years and has proved to be an effective tool for treating various disorders. In this paper, we want to share our experience in building a 3D, motion tracked, immersive VR system for pain treatment and biofeedback research. PMID:15718779

  7. An investigation of pointing postures in a 3D stereoscopic environment.

    PubMed

    Lin, Chiuhsiang Joe; Ho, Sui-Hua; Chen, Yan-Jyun

    2015-05-01

    Many object pointing and selecting techniques for large screens have been proposed in the literature. There is a lack of quantitative evidence suggesting proper pointing postures for interacting with stereoscopic targets in immersive virtual environments. The objective of this study was to explore users' performances and experiences of using different postures while interacting with 3D targets remotely in an immersive stereoscopic environment. Two postures, hand-directed and gaze-directed pointing methods, were compared in order to investigate the postural influences. Two stereo parallaxes, negative and positive parallaxes, were compared for exploring how target depth variances would impact users' performances and experiences. Fifteen participants were recruited to perform two interactive tasks, tapping and tracking tasks, to simulate interaction behaviors in the stereoscopic environment. Hand-directed pointing is suggested for both tapping and tracking tasks due to its significantly better overall performance, less muscle fatigue, and better usability. However, a gaze-directed posture is probably a better alternative than hand-directed pointing for tasks with high accuracy requirements in home-in phases. Additionally, it is easier for users to interact with targets with negative parallax than with targets with positive parallax. Based on the findings of this research, future applications involving different pointing techniques should consider both pointing performances and postural effects as a result of pointing task precision requirements and potential postural fatigue. PMID:25683543

  8. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate remote presence of users. In this paper, we present several experimental case studies to demonstrate integration of, and interaction with, 3D models and geographical information system (GIS) data in this collaborative environment.

  9. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    ERIC Educational Resources Information Center

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, mimicking the opportunity for immersion that real-life travel to a foreign country offers. We have very little research to show whether this is…

  10. NanTroSEIZE in 3-D: Creating a Virtual Research Experience in Undergraduate Geoscience Courses

    NASA Astrophysics Data System (ADS)

    Reed, D. L.; Bangs, N. L.; Moore, G. F.; Tobin, H.

    2009-12-01

    Marine research programs, both large and small, have increasingly added a web-based component to facilitate outreach to K-12 and the public, in general. These efforts have included, among other activities, information-rich websites, ship-to-shore communication with scientists during expeditions, blogs at sea, clips on YouTube, and information about daily shipboard activities. Our objective was to leverage a portion of the vast collection of data acquired through the NSF-MARGINS program to create a learning tool with a long lifespan for use in undergraduate geoscience courses. We have developed a web-based virtual expedition, NanTroSEIZE in 3-D, based on a seismic survey associated with the NanTroSEIZE program of NSF-MARGINS and IODP to study the properties of the plate boundary fault system in the upper limit of the seismogenic zone off Japan. The virtual voyage can be used in undergraduate classes at anytime, since it is not directly tied to the finite duration of a specific seagoing project. The website combines text, graphics, audio and video to place learning in an experiential framework as students participate on the expedition and carry out research. Students learn about the scientific background of the program, especially the critical role of international collaboration, and meet the chief scientists before joining the sea-going expedition. Students are presented with the principles of 3-D seismic imaging, data processing and interpretation while mapping and identifying the active faults that were the likely sources of devastating earthquakes and tsunamis in Japan in 1944 and 1948. They also learn about IODP drilling that began in 2007 and will extend through much of the next decade. The website is being tested in undergraduate classes in fall 2009 and will be distributed through the NSF-MARGINS website (http://www.nsf-margins.org/) and the MARGINS Mini-lesson section of the Science Education Resource Center (SERC) (http

  11. AnimatLab: a 3D graphics environment for neuromechanical simulations.

    PubMed

    Cofer, David; Cymbalyuk, Gennady; Reid, James; Zhu, Ying; Heitler, William J; Edwards, Donald H

    2010-03-30

    The nervous systems of animals evolved to exert dynamic control of behavior in response to the needs of the animal and changing signals from the environment. To understand the mechanisms of dynamic control requires a means of predicting how individual neural and body elements will interact to produce the performance of the entire system. AnimatLab is a software tool that provides an approach to this problem through computer simulation. AnimatLab enables a computational model of an animal's body to be constructed from simple building blocks, situated in a virtual 3D world subject to the laws of physics, and controlled by the activity of a multicellular, multicompartment neural circuit. Sensor receptors on the body surface and inside the body respond to external and internal signals and then excite central neurons, while motor neurons activate Hill muscle models that span the joints and generate movement. AnimatLab provides a common neuromechanical simulation environment in which to construct and test models of any skeletal animal, vertebrate or invertebrate. The use of AnimatLab is demonstrated in a neuromechanical simulation of human arm flexion and the myotactic and contact-withdrawal reflexes. PMID:20074588
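
    AnimatLab drives joints with Hill muscle models activated by motor neurons. The function below is a generic Hill-type force law (activation multiplied by force-length and force-velocity factors, plus a passive elastic term); the curve shapes and constants are illustrative assumptions, not AnimatLab's parameterization.

        # Minimal Hill-type muscle force law of the kind AnimatLab's muscle models are
        # based on; curve shapes and constants here are illustrative, not AnimatLab's.
        import math

        def hill_muscle_force(activation, length, velocity,
                              f_max=100.0, l_opt=1.0, v_max=10.0, k_passive=20.0):
            """Force (N) from activation [0..1], normalized fiber length and velocity."""
            # Active force-length relation: Gaussian peak at optimal fiber length.
            f_l = math.exp(-((length - l_opt) / (0.45 * l_opt)) ** 2)
            # Force-velocity relation: shortening (v > 0) reduces force output.
            f_v = max(0.0, 1.0 - velocity / v_max)
            # Passive elastic force when the fiber is stretched beyond optimal length.
            f_p = k_passive * max(0.0, length - l_opt)
            return activation * f_max * f_l * f_v + f_p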

  12. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery either through developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove and Ascension Technologies tracker, is currently in development and has already been used to visualize defects preoperatively. In the near future it will be used to more fully plan the surgery and compute the projected result to soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, and networked virtual environment.

  13. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.

  14. CamMedNP: Building the Cameroonian 3D structural natural products database for virtual screening

    PubMed Central

    2013-01-01

    Background Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets and the availability of such is vital for drug discovery protocols. We present CamMedNP - a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives which were obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. Description In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80 % of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the “drug-likeness” of this database using Lipinski’s “Rule of Five”. A diversity analysis has been carried out in comparison with the ChemBridge diverse database. Conclusion CamMedNP could be highly useful for database screening and natural product lead generation programs. PMID:23590173
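
    The database's drug-likeness was assessed with Lipinski's "Rule of Five", which flags compounds with molecular weight above 500 Da, logP above 5, more than 5 hydrogen-bond donors, or more than 10 hydrogen-bond acceptors. The sketch below applies that standard filter to precomputed descriptors; it is a generic illustration, not the CamMedNP pipeline.

        # Lipinski "Rule of Five" check on precomputed molecular descriptors; a generic
        # sketch of the filter mentioned in the abstract, not the CamMedNP pipeline.
        def lipinski_violations(mol_weight, logp, h_bond_donors, h_bond_acceptors):
            """Count Rule-of-Five violations; drug-like compounds have at most one."""
            violations = 0
            if mol_weight > 500: violations += 1
            if logp > 5: violations += 1
            if h_bond_donors > 5: violations += 1
            if h_bond_acceptors > 10: violations += 1
            return violations

        # Example: a compound with MW 320, logP 2.1, 2 donors, 5 acceptors passes.
        print(lipinski_violations(320.0, 2.1, 2, 5))  # -> 0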

  15. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes that allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end, we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of following the gaze of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. PMID:25982719

  16. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  17. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients.

    PubMed

    Lledó, Luis D; Díez, Jorge A; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J; Sabater-Navarro, José M; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a lower degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in the patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters, such as maximum speed, reaction time, path length, and initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates

  18. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a lower degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in the patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters, such as maximum speed, reaction time, path length, and initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates

  19. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets, or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field-based camera-data-generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced, describing the user's freedom to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
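
    The guided-exploration system steers the camera with a potential field in which objects of current user interest attract and scene geometry repels. The toy sketch below illustrates one gradient-descent step of that idea; the field shapes, gains, and step size are assumptions, not the CyberStage implementation.

        # Toy potential-field camera steering of the kind the guided-exploration system
        # describes; field shapes and gains are illustrative, not the CyberStage code.
        import numpy as np

        def camera_step(pos, targets, obstacles, step=0.05, k_att=1.0, k_rep=0.3):
            """Move one step along the negative gradient of attractive + repulsive fields."""
            grad = np.zeros(3)
            for t in targets:                       # objects of current user interest attract
                grad += k_att * (pos - t)
            for o in obstacles:                     # scene geometry repels at short range
                d = pos - o
                dist = np.linalg.norm(d) + 1e-9
                if dist < 1.0:
                    grad -= k_rep * d / dist**3
            return pos - step * grad

        pos = np.array([0.0, 0.0, 0.0])
        pos = camera_step(pos, targets=[np.array([2.0, 0.0, 1.0])],
                          obstacles=[np.array([1.0, 0.5, 0.5])])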

  20. Design of Virtual Environments for the Comprehension of Planetary Phenomena Based on Students' Ideas.

    ERIC Educational Resources Information Center

    Bakas, Christos; Mikropoulos, Tassos A.

    2003-01-01

    Explains the design and development of an educational virtual environment to support the teaching of planetary phenomena, particularly the movements of Earth and the sun, day and night cycle, and change of seasons. Uses an interactive, three-dimensional (3D) virtual environment. Initial results show that the majority of students enthused about…

  1. A Model Supported Interactive Virtual Environment for Natural Resource Sharing in Environmental Education

    ERIC Educational Resources Information Center

    Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.

    2013-01-01

    This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…

  2. Social Interaction Development through Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2014-01-01

    The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…

  3. Evaluation Framework for Collaborative Educational Virtual Environments

    ERIC Educational Resources Information Center

    Tsiatsos, Thrasyvoulos; Andreas, Konstantinidis; Pomportsis, Andreas

    2010-01-01

    In this paper we will focus on a specific category of Collaborative Virtual Environments that aims to support Collaborative Learning. We call these environments Collaborative Educational Virtual Environments. Our aim is to analyze the evaluation process through the study of relevant bibliography and by doing so reveal the existing research gap…

  4. Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors

    NASA Astrophysics Data System (ADS)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VEs) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention of training navigational memory in humans, an effective and efficient visual design is important to facilitate recall. However, it is not yet clear how much information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization, and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their ability to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations, and iii) the context in which the navigation is performed, thus specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  5. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping. At the end…

  6. An Australian and New Zealand Scoping Study on the Use of 3D Immersive Virtual Worlds in Higher Education

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.; Carlson, Lauren; Gregory, Sue; Tynan, Belinda

    2011-01-01

    This article describes the research design of, and reports selected findings from, a scoping study aimed at examining current and planned applications of 3D immersive virtual worlds at higher education institutions across Australia and New Zealand. The scoping study is the first of its kind in the region, intended to parallel and complement a…

  7. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract astronomical concepts that are difficult to understand in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by combining…

  8. A Virtual Geant4 Environment

    NASA Astrophysics Data System (ADS)

    Iwai, Go

    2015-12-01

    We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers users near-real-time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools, which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach provides a way of minimising the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.

  9. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have traditionally been controlled by interfacing devices: mice, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the captured data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  10. Magnetic resonance virtual histology for embryos: 3D atlases for automated high-throughput phenotyping.

    PubMed

    Cleary, Jon O; Modat, Marc; Norris, Francesca C; Price, Anthony N; Jayakody, Sujatha A; Martinez-Barbera, Juan Pedro; Greene, Nicholas D E; Hawkes, David J; Ordidge, Roger J; Scambler, Peter J; Ourselin, Sebastien; Lythgoe, Mark F

    2011-01-15

    Ambitious international efforts are underway to produce gene-knockout mice for each of the 25,000 mouse genes, providing a new platform to study mammalian development and disease. Robust, large-scale methods for morphological assessment of prenatal mice will be essential to this work. Embryo phenotyping currently relies on histological techniques but these are not well suited to large volume screening. The qualitative nature of these approaches also limits the potential for detailed group analysis. Advances in non-invasive imaging techniques such as magnetic resonance imaging (MRI) may surmount these barriers. We present a high-throughput approach to generate detailed virtual histology of the whole embryo, combined with the novel use of a whole-embryo atlas for automated phenotypic assessment. Using individual 3D embryo MRI histology, we identified new pituitary phenotypes in Hesx1 mutant mice. Subsequently, we used advanced computational techniques to produce a whole-body embryo atlas from 6 CD-1 embryos, creating an average image with greatly enhanced anatomical detail, particularly in CNS structures. This methodology enabled unsupervised assessment of morphological differences between CD-1 embryos and Chd7 knockout mice (n=5 Chd7(+/+) and n=8 Chd7(+/-), C57BL/6 background). Using a new atlas generated from these three groups, quantitative organ volumes were automatically measured. We demonstrated a difference in mean brain volumes between Chd7(+/+) and Chd7(+/-) mice (42.0 vs. 39.1mm(3), p<0.05). Differences in whole-body, olfactory and normalised pituitary gland volumes were also found between CD-1 and Chd7(+/+) mice (C57BL/6 background). Our work demonstrates the feasibility of combining high-throughput embryo MRI with automated analysis techniques to distinguish novel mouse phenotypes. PMID:20656039

  11. 3D Simulation as a Learning Environment for Acquiring the Skill of Self-Management: An Experience Involving Spanish University Students of Education

    ERIC Educational Resources Information Center

    Cela-Ranilla, Jose María; Esteve-Gonzalez, Vanessa; Esteve-Mon, Francesc; Gisbert-Cervera, Merce

    2014-01-01

    In this study we analyze how 57 Spanish university students of Education developed a learning process in a virtual world by conducting activities that involved the skill of self-management. The learning experience comprised a serious game designed in a 3D simulation environment. Descriptive statistics and non-parametric tests were used in the…

  12. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  13. Aerospace applications of virtual environment technology.

    PubMed

    Loftin, R B

    1996-11-01

    The uses of virtual environment technology in the space program are examined with emphasis on training for the Hubble Space Telescope Repair and Maintenance Mission in 1993. Project ScienceSpace at the Virtual Environment Technology Lab is discussed. PMID:11539349

  14. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... tech medical fields of biomedical visualization, computer graphics, virtual reality, and multimedia. The year was 1994. Kaufman's "two- ... organ, like the colon—and view it in virtual reality." Later, he and his team used it with ...

  15. Vasculogenesis and angiogenesis in the first trimester human placenta: an innovative 3D study using an immersive Virtual Reality system.

    PubMed

    van Oppenraaij, R H F; Koning, A H J; Lisman, B A; Boer, K; van den Hoff, M J B; van der Spek, P J; Steegers, E A P; Exalto, N

    2009-03-01

    First trimester human villous vascularization has mainly been studied by conventional two-dimensional (2D) microscopy. With this 2D technique it is not possible to observe the spatial arrangement of the haemangioblastic cords and vessels, the transition of cords into vessels, or the transition from vasculogenesis to angiogenesis. Confocal Laser Scanning Microscopy (CLSM) allows for a three-dimensional (3D) reconstruction of images of early pregnancy villous vascularization. These 3D reconstructions, however, are normally analyzed on a 2D medium, lacking depth perception. We performed a descriptive morphologic study using an immersive Virtual Reality system to utilize the third dimension fully. This innovative 3D technique visualizes 3D datasets as enlarged 3D holograms and provided detailed insight into the spatial arrangement of first trimester villous vascularization, the beginning of lumen formation within various junctions of haemangioblastic cords between 5 and 7 weeks gestational age, and the gradual transition from vasculogenesis to angiogenesis. This innovative immersive Virtual Reality system enables new perspectives for vascular research and will be implemented in future investigations. PMID:19185915

  16. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose obtained with a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis yielded 3 mm of distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
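
    The paper recovers camera pose with a constrained bundle adjustment over matched features. The sketch below is a simplified, pose-only stand-in: given hypothetical 2D-3D correspondences and assumed pinhole intrinsics, it refines a rotation/translation by minimizing reprojection error with a generic least-squares solver; it is not the constrained formulation used in the paper.

        # Simplified pose-only refinement by minimizing reprojection error; a stand-in
        # for the constrained bundle adjustment described in the paper, with
        # hypothetical intrinsics and feature correspondences.
        import numpy as np
        from scipy.spatial.transform import Rotation
        from scipy.optimize import least_squares

        fx = fy = 800.0; cx = cy = 256.0          # assumed endoscope intrinsics

        def project(points_3d, rvec, tvec):
            cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
            return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                                    fy * cam[:, 1] / cam[:, 2] + cy))

        def residuals(pose, points_3d, observed_2d):
            return (project(points_3d, pose[:3], pose[3:]) - observed_2d).ravel()

        def estimate_pose(points_3d, observed_2d, pose_init):
            """points_3d: model points matched to observed_2d pixel features."""
            result = least_squares(residuals, pose_init, args=(points_3d, observed_2d))
            return result.x[:3], result.x[3:]      # rotation vector, translation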

  17. SciEthics Interactive: Science and Ethics Learning in a Virtual Environment

    ERIC Educational Resources Information Center

    Nadolny, Larysa; Woolfrey, Joan; Pierlott, Matthew; Kahn, Seth

    2013-01-01

    Learning in immersive 3D environments allows students to collaborate, build, and interact with difficult course concepts. This case study examines the design and development of the TransGen Island within the SciEthics Interactive project, a National Science Foundation-funded, 3D virtual world emphasizing learning science content in the context of…

  18. 3-D prestack Kirchhoff depth migration: From prototype to production in a massively parallel processor environment

    SciTech Connect

    Chang, H.; Solano, M.; VanDyke, J.P.; McMechan, G.A.; Epili, D.

    1998-03-01

    Portable, production-scale 3-D prestack Kirchhoff depth migration software capable of full-volume imaging has been successfully implemented and applied to a six-million trace (46.9 Gbyte) marine data set from a salt/subsalt play in the Gulf of Mexico. Velocity model building and updates use an image-driven strategy and were performed in a Sun Sparc environment. Images obtained by 3-D prestack migration after three velocity iterations are substantially better focused and reveal drilling targets that were not visible in images obtained from conventional 3-D poststack time migration. Amplitudes are well preserved, so anomalies associated with known reservoirs conform to the petrophysical predictions. Prototype development was on an 8-node Intel iPSC860 computer; the production version was run on an 1824-node Intel Paragon computer. The code has been successfully ported to CRAY (T3D) and Unix workstation (PVM) environments.
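
    Kirchhoff prestack depth migration forms an image by summing, for each image point, the recorded amplitudes along the diffraction traveltime from source to image point to receiver. The toy 2D constant-velocity sketch below illustrates only that summation principle; the production code described here works on 3D geometries with ray-traced traveltimes, and the variable names and discretization are assumptions.

        # Toy constant-velocity 2D Kirchhoff summation to illustrate the principle;
        # not the production 3D prestack code described in the abstract.
        import numpy as np

        def kirchhoff_migrate(traces, src_x, rcv_x, dt, velocity, img_x, img_z):
            """traces: (n_traces, n_samples) recorded data; returns image (n_z, n_x)."""
            image = np.zeros((len(img_z), len(img_x)))
            for trace, sx, rx in zip(traces, src_x, rcv_x):
                for iz, z in enumerate(img_z):
                    for ix, x in enumerate(img_x):
                        # Two-way traveltime: source -> image point -> receiver.
                        t = (np.hypot(x - sx, z) + np.hypot(x - rx, z)) / velocity
                        sample = int(round(t / dt))
                        if sample < trace.size:
                            image[iz, ix] += trace[sample]
            return image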

  19. Shared virtual environments for aerospace training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen; Voss, Mark

    1994-01-01

    Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
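
    The shared-environment capability hinges on every site communicating object position and state changes to all peers so each copy of the environment maintains its currency. The snippet below is a minimal, hypothetical sketch of such an update broadcast; the message format, peer addresses, and use of UDP are assumptions, not the JSC implementation.

        # Hypothetical sketch of broadcasting object-state updates to peer sites so that
        # each shared virtual environment stays current; not the JSC implementation.
        import json, socket

        PEERS = [("192.0.2.10", 9000), ("192.0.2.11", 9000)]     # example peer addresses

        def broadcast_update(object_id, position, orientation, state):
            msg = json.dumps({"id": object_id, "pos": position,
                              "rot": orientation, "state": state}).encode()
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for addr in PEERS:
                sock.sendto(msg, addr)
            sock.close()

        broadcast_update("hatch_3", [1.2, 0.0, 2.4], [0, 0, 0, 1], "open")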

  20. Virtual agents in a simulated virtual training environment

    NASA Technical Reports Server (NTRS)

    Achorn, Brett; Badler, Norman L.

    1993-01-01

    A drawback to live-action training simulations is the need to gather a large group of participants in order to train a few individuals. One solution to this difficulty is the use of computer-controlled agents in a virtual training environment. This allows a human participant to be replaced by a virtual, or simulated, agent when only limited responses are needed. Each agent possesses a specified set of behaviors and is capable of limited autonomous action in response to its environment or the direction of a human trainee. The paper describes these agents in the context of a simulated hostage rescue training session, involving two human rescuers assisted by three virtual (computer-controlled) agents and opposed by three other virtual agents.

  1. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
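
    In the second scheme, depth is compressed with run-length encoding followed by Huffman coding. The sketch below shows only the run-length stage on a mostly flat depth frame; the Huffman stage and the motion-JPEG color path are omitted, and the frame contents are illustrative.

        # Sketch of the depth-channel handling in the second scheme: run-length encode a
        # depth frame (the Huffman stage is omitted); frame sizes are illustrative.
        import numpy as np

        def rle_encode(depth_frame):
            """Flatten a depth frame and encode it as (value, run_length) pairs."""
            flat = depth_frame.ravel()
            runs, start = [], 0
            for i in range(1, flat.size + 1):
                if i == flat.size or flat[i] != flat[start]:
                    runs.append((int(flat[start]), i - start))
                    start = i
            return runs

        depth = np.zeros((480, 640), dtype=np.uint16)   # mostly-flat depth compresses well
        depth[100:200, 100:200] = 1500
        print(len(rle_encode(depth)))                    # few runs vs 307200 raw samples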

  2. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  3. Emergency Response Virtual Environment for Safe Schools

    NASA Technical Reports Server (NTRS)

    Wasfy, Ayman; Walker, Teresa

    2008-01-01

    An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of the emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scaleable, seamlessly integrated and fully interactive intelligent and visually compelling solution.

  4. Virtual environment architecture for rapid application development

    NASA Technical Reports Server (NTRS)

    Grinstein, Georges G.; Southard, David A.; Lee, J. P.

    1993-01-01

    We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.

  5. A PC-based high-quality and interactive virtual endoscopy navigating system using 3D texture based volume rendering.

    PubMed

    Hwang, Jin-Woo; Lee, Jong-Min; Kim, In-Young; Song, In-Ho; Lee, Yong-Hee; Kim, SunI

    2003-05-01

    As an alternative to optical endoscopy, virtual endoscopy depends critically on visual quality and interactivity. One solution is to use the 3D texture-map-based volume rendering method, which offers high rendering speed without reducing visual quality. However, it is difficult to apply this method to virtual endoscopy. First, 3D texture mapping requires a high-end graphics workstation. Second, texture memory limits reduce the frame rate. Third, the lack of shading reduces visual quality significantly. As 3D texture mapping has recently become available on personal computers, we developed an interactive navigation system using 3D texture mapping on a personal computer. We divided the volume data into small cubes and tested whether each cube contained meaningful data. Only the cubes that passed the test were loaded into texture memory and rendered. With the amount of data to be rendered minimized, rendering speed increased remarkably. We also improved visual quality by implementing full Phong shading based on the iso-surface shading method without sacrificing interactivity. With the developed navigation system, 256 x 256 x 256 brain MRA data was interactively explored with good image quality. PMID:12725966
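
    The speed-up described comes from dividing the volume into small cubes and loading only those that contain meaningful data into texture memory. The sketch below shows that empty-brick culling idea on a synthetic volume; the brick size and threshold are illustrative, not the values used in the paper.

        # Sketch of the empty-brick culling idea: split the volume into small cubes and
        # keep only those containing data above a threshold; sizes here are illustrative.
        import numpy as np

        def occupied_bricks(volume, brick=32, threshold=0):
            """Yield (origin, sub-volume) for bricks that contain meaningful voxels."""
            zs, ys, xs = volume.shape
            for z in range(0, zs, brick):
                for y in range(0, ys, brick):
                    for x in range(0, xs, brick):
                        sub = volume[z:z+brick, y:y+brick, x:x+brick]
                        if sub.max() > threshold:        # skip empty bricks entirely
                            yield (z, y, x), sub

        vol = np.zeros((256, 256, 256), dtype=np.uint8)
        vol[80:120, 80:120, 80:120] = 200                # a small vessel-like region
        print(sum(1 for _ in occupied_bricks(vol)))      # only a handful of bricks load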

  6. Testing the hybrid-3D Hillslope Hydrological Model in a Real-World Controlled Environment

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Gochis, D. J.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    Hillslopes play an important role in converting rainfall into runoff and, as such, influence the terrestrial dynamics of the Earth's climate system. Recently, we have developed a hybrid-3D (h3D) hillslope hydrological model that couples a 1D vertical soil column model with a lateral pseudo-2D saturated zone and overland flow model. The h3D model gives results similar to the CATchment HYdrological model (CATHY), which simulates the subsurface movement of water with the 3D Richards equation, though the h3D model runs about 2-3 orders of magnitude faster. In the current work, the ability of the h3D model to predict real-world hydrological dynamics is assessed using a number of recharge-drainage experiments within the Landscape Evolution Observatory (LEO) at Biosphere 2 near Tucson, Arizona, USA. LEO offers accurate and high-resolution (both temporally and spatially) observations of the inputs, outputs and storage dynamics of several hillslopes. This level of detail is generally not attainable in real-world hillslope studies. Therefore, LEO offers an optimal environment to test the h3D model. The h3D model captures the observed storage, baseflow, and overland flow dynamics of both a larger and a smaller hillslope. Furthermore, it simulates overland flow better than CATHY. The h3D model has difficulties correctly representing the height of the saturated zone close to the seepage face of the smaller hillslope, though. There is a gravel layer near this seepage face, and the numerical boundary condition of the h3D model is insufficient to capture the hydrological dynamics within this region. In addition, the h3D model is used to test the hypothesis that model parameters change through time due to the migration of soil particles during the recharge-drainage experiments. An in-depth calibration of the h3D model parameters reveals that the best results are obtained by applying an event-based optimization procedure as compared

  7. NVision: A 3D Visualization Environment for N-Body Simulations

    NASA Astrophysics Data System (ADS)

    Markiel, J. A.

    2000-05-01

    NVision: A 3D Visualization Environment for N-Body Simulations We are developing a set of packages for 3D visualization and analysis of our numerical N-body simulations. These tools are intended to be generalizable to a wide range of related problems including cosmological, planetary dynamics, and molecular dynamics simulations. The applications and source code will be fully available to the community. To prototype this project we have adopted the Java platform with the newly released Java3D extension to take advantage of its portability, object-oriented environment, and availability of extensive documentation and class libraries. We will describe the goals and design principles of the project and demo the currently implemented features, including visualization of cosmological simulations and the simulated collision of two rubble-pile asteroids. This research is supported by NSF grants AST99-73209 and AST99-79891.

  8. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand from original equipment manufacturers (OEMs) to implement a reliable metrology method in assessing surface profiles of welds before and after grinding. This certainly mandates the deviation from the commonly used surface measurement gauges, which are not only operator dependent, but also limited to discrete measurements along the weld. Due to its potential accuracy and speed, the use of laser-based vision surface profiling systems has been progressively rising as part of manufacturing quality control. This effort presents a virtual environment that lends itself for developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D printed features of known profiles, respectively. Scanned data is inverted and compared with the input profiles to validate the virtual environment capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.
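    The "invert and compare" validation step can be sketched as a simple profile comparison: a scanned height profile is compared against the known input profile after removing any constant offset. The Gaussian bead shape, noise level, and function names below are illustrative assumptions rather than the paper's actual procedure.

```python
import numpy as np

def profile_deviation(reference_z, scanned_z):
    """Compare a scanned weld profile with the known input profile after
    removing any constant height offset; returns RMS and peak deviation.
    This mirrors the 'invert and compare' validation step in spirit only."""
    scanned = scanned_z - np.mean(scanned_z - reference_z)   # remove bias
    err = scanned - reference_z
    return np.sqrt(np.mean(err ** 2)), np.max(np.abs(err))

# Simulated weld cap: a Gaussian bead profile plus scanner noise and offset.
x = np.linspace(-10, 10, 400)                                # mm across the weld
reference = 2.0 * np.exp(-x ** 2 / 8.0)                      # mm height
scanned = reference + np.random.default_rng(3).normal(0, 0.05, x.size) + 0.3
print("RMS / peak deviation (mm):", profile_deviation(reference, scanned))
```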

  9. Robot navigation in cluttered 3-D environments using preference-based fuzzy behaviors.

    PubMed

    Shi, Dongqing; Collins, Emmanuel G; Dunlap, Damion

    2007-12-01

    Autonomous navigation systems for mobile robots have been successfully deployed for a wide range of planar ground-based tasks. However, very few counterparts of previous planar navigation systems were developed for 3-D motion, which is needed for both unmanned aerial and underwater vehicles. A novel fuzzy behavioral scheme for navigating an unmanned helicopter in cluttered 3-D spaces is developed. The 3-D navigation problem is decomposed into several identical 2-D navigation subproblems, each of which is solved by using preference-based fuzzy behaviors. Due to the shortcomings of vector summation during the fusion of the 2-D subproblems, instead of directly outputting steering subdirections by their own defuzzification processes, the intermediate preferences of the subproblems are fused to create a 3-D solution region, representing degrees of preference for the robot movement. A new defuzzification algorithm that steers the robot by finding the centroid of a 3-D convex region of maximum volume in the 3-D solution region is developed. A fuzzy speed-control system is also developed to ensure efficient and safe navigation. Substantial simulations have been carried out to demonstrate that the proposed algorithm can smoothly and effectively guide an unmanned helicopter through unknown and cluttered urban and forest environments. PMID:18179068
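    A heavily simplified, hedged sketch of the preference-fusion idea follows: two 2-D preference maps (one per decomposed subproblem) are combined into a 3-D preference volume, and the robot is steered toward the centroid of the most-preferred cells. The thresholded-centroid defuzzifier used here is only a stand-in for the paper's maximum-volume convex-region defuzzifier, and all grid sizes and names are assumptions.

```python
import numpy as np

def fuse_and_defuzzify(pref_xy, pref_xz, keep=0.8):
    """Fuse two 2-D preference maps (azimuth x range, elevation x range)
    into a 3-D preference volume and return the (azimuth, elevation, range)
    bin centroid of the most-preferred cells."""
    volume = np.minimum(pref_xy[:, None, :], pref_xz[None, :, :])
    mask = volume >= keep * volume.max()          # keep only the top preferences
    idx = np.argwhere(mask).astype(float)
    weights = volume[mask]
    return (idx * weights[:, None]).sum(axis=0) / weights.sum()

# Toy preferences over 21 azimuth bins, 21 elevation bins, 11 range bins:
# an obstacle slightly right of centre lowers the azimuth preferences there.
az = np.linspace(-1.0, 1.0, 21)
el = np.linspace(-1.0, 1.0, 21)
pref_az = 1.0 - 0.8 * np.exp(-(az - 0.3) ** 2 / 0.05)
pref_el = 1.0 - 0.8 * np.exp(-el ** 2 / 0.05)
pref_xy = np.repeat(pref_az[:, None], 11, axis=1)   # (21 azimuth, 11 range)
pref_xz = np.repeat(pref_el[:, None], 11, axis=1)   # (21 elevation, 11 range)
print("steer toward bins (az, el, r):", fuse_and_defuzzify(pref_xy, pref_xz))
```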

  10. Taking Science Online: Evaluating Presence and Immersion through a Laboratory Experience in a Virtual Learning Environment for Entomology Students

    ERIC Educational Resources Information Center

    Annetta, Leonard; Klesath, Marta; Meyer, John

    2009-01-01

    A 3-D virtual field trip was integrated into an online college entomology course and developed as a trial for the possible incorporation of future virtual environments to supplement online higher education laboratories. This article provides an explanation of the rationale behind creating the virtual experience, the Bug Farm; the method and…

  11. Generation of a tumor spheroid in a microgravity environment as a 3D model of melanoma.

    PubMed

    Marrero, Bernadette; Messina, Jane L; Heller, Richard

    2009-10-01

    An in vitro 3D model was developed utilizing a synthetic microgravity environment to facilitate studying cell interactions. 2D monolayer cell culture models have been successfully used to understand various cellular reactions that occur in vivo. There are some limitations to the 2D model that are apparent when compared to cells grown in a 3D matrix. For example, some proteins that are not expressed in a 2D model are found up-regulated in the 3D matrix. In this paper, we discuss techniques used to develop the first known large, free-floating 3D tissue model used to establish tumor spheroids. The bioreactor system known as the High Aspect Ratio Vessel (HARV) was used to provide a microgravity environment. The HARV promoted aggregation of keratinocytes (HaCaT) that formed a construct serving as scaffolding for the growth of mouse melanoma. Although there is an emphasis on building a 3D model with the proper extracellular matrix and stroma, we were able to develop a model that excluded the use of Matrigel. Immunohistochemistry and apoptosis assays provided evidence that this 3D model supports B16.F10 cell growth, proliferation, and synthesis of extracellular matrix. Immunofluorescence showed that melanoma cells interact with one another, displaying observable cellular morphological changes. The goal of engineering a 3D tissue model is to collect new information about cancer development and to develop new potential treatment regimens that can be translated to in vivo models while reducing the use of laboratory animals. PMID:19533253

  12. Best Practices for Designing Online Learning Environments for 3D Modeling Curricula: A Delphi Study

    ERIC Educational Resources Information Center

    Mapson, Kathleen Harrell

    2011-01-01

    The purpose of this study was to develop an inventory of best practices for designing online learning environments for 3D modeling curricula. Due to the instructional complexity of three-dimensional modeling, few have sought to develop this type of course for online teaching and learning. Considering this, the study aimed to collectively aggregate…

  13. Physical Environment as a 3-D Textbook: Design and Development of a Prototype

    ERIC Educational Resources Information Center

    Kong, Seng Yeap; Yaacob, Naziaty Mohd; Ariffin, Ati Rosemary Mohd

    2015-01-01

    The use of the physical environment as a three-dimensional (3-D) textbook is not a common practice in educational facilities design. Previous researches documented that little progress has been made to incorporate environmental education (EE) into architecture, especially among the conventional designers who are often constrained by the budget and…

  14. Structuring Narrative in 3D Digital Game-Based Learning Environments to Support Second Language Acquisition

    ERIC Educational Resources Information Center

    Neville, David O.

    2010-01-01

    The essay is a conceptual analysis from an instructional design perspective exploring the feasibility of using three-dimensional digital game-based learning (3D-DGBL) environments to assist in second language acquisition (SLA). It examines the shared characteristics of narrative within theories of situated cognition, context-based approaches to…

  15. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    PubMed

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides are based on external and internal morphological findings of the injured or deceased person. For this approach high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole body model of the deceased. In addition to the findings from the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. PMID:22727689

  16. Automatic 360-deg profilometry of a 3D object using a shearing interferometer and virtual grating

    NASA Astrophysics Data System (ADS)

    Zhang, Yong-Lin; Bu, Guixue

    1996-10-01

    The phase-measuring technique has been widely used in optical precision inspection because of its distinct advantages. We use the phase-measuring technique and design a practical instrument for measuring the 360-degree profile of a 3D object. A novel method that can realize profile detection with higher speed and lower cost is proposed. A phase unwrapping algorithm based on second-order differentiation is developed. A complete 3D shape is reconstructed from a series of line-section profiles corresponding to discrete angular positions of the object. The profile-joining procedure involves only two fixed parameters and a coordinate transformation.
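    The reconstruction from line sections can be sketched as follows: each wrapped-phase section is unwrapped, converted to a radial profile, and rotated to its angular position so that the sections join into one 360-degree shape. Here np.unwrap stands in for the paper's second-order-differentiation unwrapper, and the calibration constants are assumed values.

```python
import numpy as np

def section_to_3d(wrapped_phase, base_radius, mm_per_radian, angle_rad):
    """Turn one wrapped-phase line section into 3-D points on the object.

    np.unwrap is a stand-in for the second-order-differentiation unwrapper;
    base_radius and mm_per_radian are assumed calibration constants mapping
    unwrapped phase to radial distance.
    """
    phase = np.unwrap(wrapped_phase)
    r = base_radius + mm_per_radian * (phase - phase[0])   # radial profile
    z = np.arange(len(r), dtype=float)                     # height along section
    x = r * np.cos(angle_rad)                              # profile joining:
    y = r * np.sin(angle_rad)                              # rotate to position
    return np.column_stack([x, y, z])

# Merge line sections captured every 10 degrees into one 360-degree shape.
sections = []
for k in range(36):
    true_phase = 4.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, 100))
    wrapped = np.angle(np.exp(1j * true_phase))            # simulated wrapping
    sections.append(section_to_3d(wrapped, 20.0, 1.5, np.deg2rad(10 * k)))
cloud = np.vstack(sections)
print(cloud.shape)   # (3600, 3) points describing the full profile
```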

  17. Virtually Ostracized: Studying Ostracism in Immersive Virtual Environments

    PubMed Central

    Wesselmann, Eric D.; Law, Alvin Ty; Williams, Kipling D.

    2012-01-01

    Abstract Electronic-based communication (such as Immersive Virtual Environments; IVEs) may offer new ways of satisfying the need for social connection, but they also provide ways this need can be thwarted. Ostracism, being ignored and excluded, is a common social experience that threatens fundamental human needs (i.e., belonging, control, self-esteem, and meaningful existence). Previous ostracism research has made use of a variety of paradigms, including minimal electronic-based interactions (e.g., Cyberball) and communication (e.g., chatrooms and Short Message Services). These paradigms, however, lack the mundane realism that many IVEs now offer. Further, IVE paradigms designed to measure ostracism may allow researchers to test more nuanced hypotheses about the effects of ostracism. We created an IVE in which ostracism could be manipulated experimentally, emulating a previously validated minimal ostracism paradigm. We found that participants who were ostracized in this IVE experienced the same negative effects demonstrated in other ostracism paradigms, providing, to our knowledge, the first evidence of the negative effects of ostracism in virtual environments. Though further research directly exploring these effects in online virtual environments is needed, this research suggests that individuals encountering ostracism in other virtual environments (such as massively multiplayer online role playing games; MMORPGs) may experience negative effects similar to those of being ostracized in real life. This possibility may have serious implications for individuals who are marginalized in their real life and turn to IVEs to satisfy their need for social connection. PMID:22897472

  18. Sharing visualization experiences among remote virtual environments

    SciTech Connect

    Disz, T.L.; Papka, M.E.; Pellegrino, M.; Stevens, R.

    1995-12-31

    Virtual reality has become an increasingly familiar part of the science of visualization and communication of information. This, combined with the increase in connectivity of remote sites via high-speed networks, allows for the development of a collaborative distributed virtual environment. Such an environment enables the development of supercomputer simulations with virtual reality visualizations that can be displayed at multiple sites, with each site interacting, viewing, and communicating about the results being discovered. The early results of an experimental collaborative virtual reality environment are discussed in this paper. The issues that need to be addressed in the implementation, as well as preliminary results, are covered. A discussion of plans and a generalized application programmer's interface for CAVE-to-CAVE communication are also provided.

  19. Distributed virtual environment for emergency medical training

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.

    1997-07-01

    In many professions where individuals must work in a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is the field of emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment-based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. There are several issues that must be addressed before this vision is realized. The key issues deal with distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and an accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues. In this

  20. Controlling Social Stress in Virtual Reality Environments

    PubMed Central

    Hartanto, Dwi; Kampmann, Isabel L.; Morina, Nexhmedin; Emmelkamp, Paul G. M.; Neerincx, Mark A.; Brinkman, Willem-Paul

    2014-01-01

    Virtual reality exposure therapy has been proposed as a viable alternative in the treatment of anxiety disorders, including social anxiety disorder. Therapists could benefit from extensive control of anxiety-eliciting stimuli during virtual exposure. Two stimulus controls are studied here: the social dialogue situation, and the dialogue feedback responses (negative or positive) between a human and a virtual character. In the first study, 16 participants were exposed in three virtual reality scenarios: a neutral virtual world, a blind date scenario, and a job interview scenario. Results showed a significant difference between the three virtual scenarios in the level of self-reported anxiety and heart rate. In the second study, 24 participants were exposed to a job interview scenario in a virtual environment where the ratio between negative and positive dialogue feedback responses of a virtual character was systematically varied on-the-fly. Results yielded that within a dialogue more positive dialogue feedback resulted in less self-reported anxiety, lower heart rate, and longer answers, while more negative dialogue feedback of the virtual character resulted in the opposite. The correlations between the dialogue stressor ratio on the one hand and the means of SUD score, heart rate and audio length in the eight dialogue conditions on the other hand showed a strong relationship: r(6) = 0.91, p = 0.002; r(6) = 0.76, p = 0.028 and r(6) = −0.94, p = 0.001, respectively. Furthermore, more anticipatory anxiety reported before exposure was found to coincide with more self-reported anxiety and shorter answers during the virtual exposure. These results demonstrate that social dialogues in a virtual environment can be effectively manipulated for therapeutic purposes. PMID:24671006

  1. Generating virtual textile composite specimens using statistical data from micro-computed tomography: 3D tow representations

    NASA Astrophysics Data System (ADS)

    Rinaldi, Renaud G.; Blacklock, Matthew; Bale, Hrishikesh; Begley, Matthew R.; Cox, Brian N.

    2012-08-01

    Recent work presented a Monte Carlo algorithm based on Markov Chain operators for generating replicas of textile composite specimens that possess the same statistical characteristics as specimens imaged using high resolution x-ray computed tomography. That work represented the textile reinforcement by one-dimensional tow loci in three-dimensional space, suitable for use in the Binary Model of textile composites. Here analogous algorithms are used to generate solid, three-dimensional (3D) tow representations, to provide geometrical models for more detailed failure analyses. The algorithms for generating 3D models are divided into those that refer to the topology of the textile and those that deal with its geometry. The topological rules carry all the information that distinguishes textiles with different interlacing patterns (weaves, braids, etc.) and provide instructions for resolving interpenetrations or ordering errors among tows. They also simplify writing a single computer program that can accept input data for generic textile cases. The geometrical rules adjust the shape and smoothness of the generated virtual specimens to match data from imaged specimens. The virtual specimen generator is illustrated using data for an angle interlock weave, a common 3D textile architecture.
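    As a hedged illustration of the statistical-replica idea (not the authors' actual Markov Chain operators), a virtual tow centerline can be generated as a nominal path plus a lag-one correlated random deviation whose variance and correlation would be estimated from the micro-CT data; all names and values below are assumptions.

```python
import numpy as np

def generate_tow_centerline(nominal, sigma, corr, rng):
    """Return one virtual tow centerline: the nominal loci plus a lag-one
    correlated (Markov-chain-like) random deviation. sigma and corr would be
    estimated from micro-CT statistics in the actual method."""
    dev = np.zeros_like(nominal)
    step_std = sigma * np.sqrt(1.0 - corr ** 2)
    for i in range(1, len(nominal)):
        dev[i] = corr * dev[i - 1] + rng.normal(0.0, step_std, nominal.shape[1])
    return nominal + dev

rng = np.random.default_rng(0)
s = np.linspace(0.0, 10.0, 200)
nominal = np.column_stack([s, 0.2 * np.sin(2 * np.pi * s / 5.0), np.zeros_like(s)])
replica = generate_tow_centerline(nominal, sigma=0.05, corr=0.95, rng=rng)
print("per-axis deviation std of the replica:", np.std(replica - nominal, axis=0))
```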

  2. Proteopedia: A Collaborative, Virtual 3D Web-Resource for Protein and Biomolecule Structure and Function

    ERIC Educational Resources Information Center

    Hodis, Eran; Prilusky, Jaime; Sussman, Joel L.

    2010-01-01

    Protein structures are hard to represent on paper. They are large, complex, and three-dimensional (3D)--four-dimensional if conformational changes count! Unlike most of their substrates, which can easily be drawn out in full chemical formula, drawing every atom in a protein would usually be a mess. Simplifications like showing only the surface of…

  3. Computer Applications and Virtual Environments (CAVE)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup were to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.

  4. Computer Applications and Virtual Environments (CAVE)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup were to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.

  5. Computer Applications and Virtual Environments (CAVE)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).

  6. Identification of potential influenza virus endonuclease inhibitors through virtual screening based on the 3D-QSAR model.

    PubMed

    Kim, J; Lee, C; Chong, Y

    2009-01-01

    Influenza endonucleases have emerged as an attractive target for antiviral therapy against influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activities against the influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q(2) = 0.763, r(2) = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in a UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates. PMID:19343586

  7. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on Web camera (Webcam) input-interface. The interface can replace with the mouse to control direction intention of a user by the method of frame difference. We divide a frame into nine grids from Webcam and make use of the background registration to compute the moving object. In order to…
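    A minimal sketch of the frame-difference, nine-grid idea follows; the threshold, grid labels, and frame sizes are illustrative assumptions, and the background-registration step mentioned in the abstract is omitted for brevity.

```python
import numpy as np

def grid_motion(prev_frame, frame, threshold=25):
    """Split the frame difference into a 3x3 grid and report which cell
    contains the most motion; the direction labels are an assumed mapping."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
    h, w = diff.shape
    counts = np.zeros((3, 3), dtype=int)
    for r in range(3):
        for c in range(3):
            cell = diff[r * h // 3:(r + 1) * h // 3, c * w // 3:(c + 1) * w // 3]
            counts[r, c] = cell.sum()
    labels = [["up-left", "up", "up-right"],
              ["left", "center", "right"],
              ["down-left", "down", "down-right"]]
    r, c = np.unravel_index(counts.argmax(), counts.shape)
    return labels[r][c], counts

# Toy frames: motion appears in the right-middle cell of a 240x320 image.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 260:300] = 255
print(grid_motion(prev, curr)[0])   # -> "right"
```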

  8. Virtual Presence and the Mind's Eye in 3-D Online Communities

    NASA Astrophysics Data System (ADS)

    Beacham, R. C.; Denard, H.; Baker, D.

    2011-09-01

    Digital technologies have introduced fundamental changes in the forms, content, and media of communication. Indeed, some have suggested we are in the early stages of a seismic shift comparable to that in antiquity with the transition from a primarily oral culture to one based upon writing. The digital transformation is rapidly displacing the long-standing hegemony of text, and restoring in part social, bodily, oral and spatial elements, but in radically reconfigured forms and formats. Contributing to and drawing upon such changes and possibilities, scholars and those responsible for sites preserving or displaying cultural heritage, have undertaken projects to explore the properties and potential of the online communities enabled by "Virtual Worlds" and related platforms for teaching, collaboration, publication, and new modes of disciplinary research. Others, keenly observing and evaluating such work, are poised to contribute to it. It is crucial that leadership be provided to ensure that serious and sustained investigation be undertaken by scholars who have experience, and achievements, in more traditional forms of research, and who perceive the emerging potential of Virtual World work to advance their investigations. The Virtual Museums Transnational Network will seek to engage such scholars and provide leadership in this emerging and immensely attractive new area of cultural heritage exploration and experience. This presentation reviews examples of the current "state of the art" in heritage based Virtual World initiatives, looking at the new modes of social interaction and experience enabled by such online communities, and some of the achievements and future aspirations of this work.

  9. "The Evolution of e-Learning in the Context of 3D Virtual Worlds"

    ERIC Educational Resources Information Center

    Kotsilieris, Theodore; Dimopoulou, Nikoletta

    2013-01-01

    Information and Communication Technologies (ICT) offer new approaches towards knowledge acquisition and collaboration through distance learning processes. Web-based Learning Management Systems (LMS) have transformed the way that education is conducted nowadays. At the same time, the adoption of Virtual Worlds in the educational process is of great…

  10. Collaboration and Knowledge Sharing Using 3D Virtual World on "Second Life"

    ERIC Educational Resources Information Center

    Rahim, Noor Faridah A.

    2013-01-01

    A collaborative and knowledge sharing virtual activity on "Second Life" using a learner-centred teaching methodology was initiated between Temasek Polytechnic and The Hong Kong Polytechnic University (HK PolyU) in the October 2011 semester. This paper highlights the author's experience in designing and implementing this e-learning…

  11. Recent improvements in SPE3D: a VR-based surgery planning environment

    NASA Astrophysics Data System (ADS)

    Witkowski, Marcin; Sitnik, Robert; Verdonschot, Nico

    2014-02-01

    SPE3D is a surgery planning environment developed within TLEMsafe project [1] (funded by the European Commission FP7). It enables the operator to plan a surgical procedure on the customized musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the biomechanical analysis module, and export the scenario's parameters to the surgical navigation system. The personalized patient-specific three-dimensional (3-D) MS model is registered with 3-D MRI dataset of lower limbs and the two modalities may be visualized simultaneously. Apart from main planes, any arbitrary MRI cross-section can be rendered on the 3-D MS model in real time. The interface provides tools for: bone cutting, manipulating and removal, repositioning muscle insertion points, modifying muscle force, removing muscles and placing implants stored in the implant library. SPE3D supports stereoscopic viewing as well as natural inspection/manipulation with use of haptic devices. Alternatively, it may be controlled with use of a standard computer keyboard, mouse and 2D display or a touch screen (e.g. in an operating room). The interface may be utilized in two main fields. Experienced surgeons may use it to simulate their operative plans and prepare input data for a surgical navigation system while student or novice surgeons can use it for training.

  12. Learner Perceptions and Recall of Small Group Discussions within 2D and 3D Collaborative Environments

    ERIC Educational Resources Information Center

    Downey, Steve; Mohler, Jill; Morris, Joan; Sanchez, Rene

    2012-01-01

    Online learning critically relies upon good communication between engaged parties in order to convey ideas, meanings, and values. Emerging technologies in collaborative virtual environments are providing new affordances in establishing greater online presence and, in turn, greater abilities to communicate and learn. This study examines how…

  13. Design and application of a virtual reality 3D engine based on rapid indices

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Mai, Jin

    2007-06-01

    This article proposes a data structure for a 3D engine based on rapid indices. Taking a model as the construction unit, this data structure can rapidly build an array of 3D vertex coordinates and arrange those vertices in sequences of triangle strips or triangle fans, which can be rendered rapidly by OpenGL. This data structure is easy to extend. It can hold texture coordinates, vertex normal coordinates and a model matrix. Other models can be added to it, deleted from it, or transformed by the model matrix, so it is flexible. This data structure also improves OpenGL rendering speed when it holds a large amount of data.
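    A hedged sketch of such a rapid-index structure is shown below: a shared vertex array, per-strip index arrays, and a model matrix, which a renderer would walk to issue one triangle-strip draw call per strip. The class and field names are assumptions, not those of the original engine, and the OpenGL calls themselves are omitted.

```python
import numpy as np

class StripMesh:
    """Minimal stand-in for a rapid-index data structure: a shared vertex
    array plus triangle-strip index lists and a per-model transform matrix."""
    def __init__(self):
        self.vertices = np.empty((0, 3), dtype=np.float32)   # x, y, z
        self.strips = []                                      # index arrays
        self.model_matrix = np.eye(4, dtype=np.float32)

    def add_strip(self, points):
        """Append a new strip's vertices and record their indices."""
        start = len(self.vertices)
        self.vertices = np.vstack([self.vertices, np.asarray(points, np.float32)])
        self.strips.append(np.arange(start, start + len(points), dtype=np.uint32))

    def transformed_vertices(self):
        """Apply the model matrix; a renderer would then walk each strip's
        indices and issue one GL_TRIANGLE_STRIP draw call per strip."""
        homo = np.hstack([self.vertices, np.ones((len(self.vertices), 1), np.float32)])
        return (homo @ self.model_matrix.T)[:, :3]

mesh = StripMesh()
mesh.add_strip([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)])  # one quad as a strip
mesh.model_matrix[:3, 3] = [2.0, 0.0, 0.0]                    # translate in x
print(mesh.transformed_vertices())
```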

  14. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at near range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on the optical-sensor-based system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, the bare-finger touch system with a sequential illuminator makes it possible to interact with auto-stereoscopic images using a bare finger. Furthermore, the proposed methods were verified on a 4-inch panel with embedded optical sensors.

  15. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. The mesh models are homologous, so they share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes of our model are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with those of the input silhouettes; we then use only the torso-part contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.
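    The silhouette-fitting loop can be illustrated with a toy version: a low-dimensional shape model produces front and side silhouette contours, and a derivative-free optimizer adjusts the shape parameters until the contours match the input. Nelder-Mead is used below as a stand-in for CMA-ES, and the elliptic-cylinder "body" is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def silhouette_widths(radii):
    """Front/side silhouette contours of a toy elliptic-cylinder body:
    at each height the front view shows width 2a and the side view 2b."""
    a, b = radii
    heights = np.linspace(0, 1, 50)
    taper = 1.0 - 0.3 * heights              # body narrows toward the top
    return np.concatenate([2 * a * taper, 2 * b * taper])

target = silhouette_widths((0.18, 0.12))     # "input" silhouettes to match

def cost(params):
    # Difference between the model's and the input silhouettes' contours.
    return np.sum((silhouette_widths(params) - target) ** 2)

# Nelder-Mead as a stand-in for the paper's CMA-ES optimizer.
result = minimize(cost, x0=np.array([0.25, 0.25]), method="Nelder-Mead")
print("estimated radii:", result.x)
```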

  16. Virtual Reality Training Environments: Contexts and Concerns.

    ERIC Educational Resources Information Center

    Harmon, Stephen W.; Kenney, Patrick J.

    1994-01-01

    Discusses the contexts where virtual reality (VR) training environments might be appropriate; examines the advantages and disadvantages of VR as a training technology; and presents a case study of a VR training environment used at the NASA Johnson Space Center in preparation for the repair of the Hubble Space Telescope. (AEF)

  17. Distributed collaborative environment with real-time tracking of 3D body postures

    NASA Astrophysics Data System (ADS)

    Alisi, Thomas M.; Del Bimbo, Alberto; Pucci, Fabio; Valli, Alessandro

    2003-12-01

    In this paper a multi-user motion capture system is presented, where users work from separate locations and interact in a common virtual environment. The system functions well on low-end personal computers; it implements a natural human/machine interaction due to the complete absence of markers and weak constraints on users' clothes and environment lighting. It is suitable for every-day use, where the great precision reached by complex commercial systems is not the principal requisite.

  18. Guidelines for developing distributed virtual environment applications

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.

    1998-08-01

    We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework; and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas. These areas are requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.

  19. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  20. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  1. Scaffold hopping through virtual screening using 2D and 3D similarity descriptors: ranking, voting, and consensus scoring.

    PubMed

    Zhang, Qiang; Muegge, Ingo

    2006-03-01

    The ability to find novel bioactive scaffolds in compound similarity-based virtual screening experiments has been studied comparing Tanimoto-based, ranking-based, voting, and consensus scoring protocols. Ligand sets for seven well-known drug targets (CDK2, COX2, estrogen receptor, neuraminidase, HIV-1 protease, p38 MAP kinase, thrombin) have been assembled such that each ligand represents its own unique chemotype, thus ensuring that each similarity recognition event between ligands constitutes a scaffold hopping event. In a series of virtual screening studies involving 9969 MDDR compounds as negative controls it has been found that atom pair descriptors and 3D pharmacophore fingerprints combined with ranking, voting, and consensus scoring strategies perform well in finding novel bioactive scaffolds. In addition, often superior performance has been observed for similarity-based virtual screening compared to structure-based methods. This finding suggests that information about a target obtained from known bioactive ligands is as valuable as knowledge of the target structures for identifying novel bioactive scaffolds through virtual screening. PMID:16509572
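    The core similarity and rank-fusion machinery can be sketched in a few lines: Tanimoto similarity over binary fingerprints, per-query ranking, and a simple best-rank consensus. The fingerprint length, library size, and fusion rule below are illustrative assumptions rather than the exact protocols compared in the paper.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

def consensus_rank(library, actives):
    """Rank library compounds by their best rank over several query actives
    (a simple rank-fusion stand-in for the voting/consensus protocols)."""
    ranks = []
    for query in actives:
        sims = np.array([tanimoto(query, fp) for fp in library])
        order = np.argsort(-sims)                 # most similar first
        rank_of = np.empty_like(order)
        rank_of[order] = np.arange(len(library))
        ranks.append(rank_of)
    best_rank = np.min(ranks, axis=0)             # consensus: best rank wins
    return np.argsort(best_rank)

rng = np.random.default_rng(1)
library = rng.integers(0, 2, size=(1000, 256), dtype=np.int8)
actives = library[:3] ^ (rng.random((3, 256)) < 0.05)   # noisy copies as queries
print("top hits:", consensus_rank(library, actives)[:5])
```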

  2. Microfabricated Tepui: probing into cancer invasion, metastasis and evolution in a 3D environment

    NASA Astrophysics Data System (ADS)

    Liu, Liyu

    2011-03-01

    Cancer metastasis and chemotherapeutic resistance are the major reasons why cancer remains recalcitrant to long-term therapy. We are interested in two questions: 1. How do cancer cells invade tissues and metastasize in a 3D spatial environment? 2. How do cancer cells evolve resistance to chemotherapy? Answering these fundamental questions will require spatially propagating cancer cells in a 3D in vitro micro-environment with dynamically controlled chemical stress. Here we attempt to realize this micro-environment with a three-dimensional topology on a micro-chip which consists of isolated highlands (Tepui) and deep lowlands. Cancer cells are patterned in the lowlands and their spatial invasion onto the mesas of the Tepui is observed continuously with a microscope. Experiments have demonstrated that the cell invasion potential is time dependent and is determined not only by cell motility, but also by cell number and spatial stress. Quantitative analysis shows that the invasion rate fits a logistic equation. Furthermore, we have also embedded collagen-based Extracellular Matrix (ECM) inside these structures and established a robust chemical gradient in a vertical space. With the merit of real-time confocal imaging, cell propagation, metastasis and evolution in the 3D environment are studied over time as a model for cell behavior inside tissues. NCI grant: U54CA143803.
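    The statement that the invasion rate fits a logistic equation can be illustrated with a small fitting sketch on synthetic data; the parameter names and values are assumptions, not measurements from the experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic 'fraction of highland area invaded' versus time (days).
t = np.linspace(0, 20, 40)
observed = logistic(t, K=0.9, r=0.6, t0=10) + \
    0.02 * np.random.default_rng(0).normal(size=t.size)

params, _ = curve_fit(logistic, t, observed, p0=(1.0, 0.5, 8.0))
print("fitted K, r, t0:", params)
```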

  3. User-centered virtual environment design for virtual rehabilitation

    PubMed Central

    2010-01-01

    Background As physical and cognitive rehabilitation protocols utilizing virtual environments transition from single applications to comprehensive rehabilitation programs there is a need for a new design cycle methodology. Current human-computer interaction designs focus on usability without benchmarking technology within a user-in-the-loop design cycle. The field of virtual rehabilitation is unique in that determining the efficacy of this genre of computer-aided therapies requires prior knowledge of technology issues that may confound patient outcome measures. Benchmarking the technology (e.g., displays or data gloves) using healthy controls may provide a means of characterizing the "normal" performance range of the virtual rehabilitation system. This standard not only allows therapists to select appropriate technology for use with their patient populations, it also allows them to account for technology limitations when assessing treatment efficacy. Methods An overview of the proposed user-centered design cycle is given. Comparisons of two optical see-through head-worn displays provide an example of benchmarking techniques. Benchmarks were obtained using a novel vision test capable of measuring a user's stereoacuity while wearing different types of head-worn displays. Results from healthy participants who performed both virtual and real-world versions of the stereoacuity test are discussed with respect to virtual rehabilitation design. Results The user-centered design cycle argues for benchmarking to precede virtual environment construction, especially for therapeutic applications. Results from real-world testing illustrate the general limitations in stereoacuity attained when viewing content using a head-worn display. Further, the stereoacuity vision benchmark test highlights differences in user performance when utilizing a similar style of head-worn display. These results support the need for including benchmarks as a means of better understanding user outcomes

  4. Physical environment virtualization for human activities recognition

    NASA Astrophysics Data System (ADS)

    Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2015-05-01

    Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to aid as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.

  5. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, the coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique. PMID:27410124

  6. Comparative brain morphology of Neotropical parrots (Aves, Psittaciformes) inferred from virtual 3D endocasts.

    PubMed

    Carril, Julieta; Tambussi, Claudia Patricia; Degrange, Federico Javier; Benitez Saldivar, María Juliana; Picasso, Mariana Beatriz Julieta

    2016-08-01

    Psittaciformes are a very diverse group of non-passerine birds, with advanced cognitive abilities and highly developed locomotor and feeding behaviours. Using computed tomography and three-dimensional (3D) visualization software, the endocasts of 14 extant Neotropical parrots were reconstructed, with the aim of analysing, comparing and exploring the morphology of the brain within the clade. A 3D geomorphometric analysis was performed, and the encephalization quotient (EQ) was calculated. Brain morphology character states were traced onto a Psittaciformes tree in order to facilitate interpretation of morphological traits in a phylogenetic context. Our results indicate that: (i) there are two conspicuously distinct brain morphologies, one considered walnut type (quadrangular and wider than long) and the other rounded (narrower and rostrally tapered); (ii) Psittaciformes possess a noticeable notch between hemisphaeria that divides the bulbus olfactorius; (iii) the plesiomorphic and most frequently observed characteristics of Neotropical parrots are a rostrally tapered telencephalon in dorsal view, distinctly enlarged dorsal expansion of the eminentia sagittalis and conspicuous fissura mediana; (iv) there is a positive correlation between body mass and brain volume; (v) psittacids are characterized by high EQ values that suggest high brain volumes in relation to their body masses; and (vi) the endocranial morphology of the Psittaciformes as a whole is distinctive relative to other birds. This new knowledge of brain morphology offers much potential for further insight in paleoneurological, phylogenetic and evolutionary studies. PMID:26053196

  7. Foreign language learning in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Sheldon, Lee; Si, Mei; Hand, Anton

    2012-03-01

    Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages. Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point. We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to truly effective language learning in virtual environments. In this paper, we describe the development of a novel application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon becomes complicated with intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic. Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language, cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with characters in the game.

  8. Enhanced Rgb-D Mapping Method for Detailed 3d Modeling of Large Indoor Environments

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-06-01

    RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a measurement range with a limited distance (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences can be resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments, and the experimental results demonstrate the feasibility and robustness of the proposed method.
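    The rigid-transformation recovery step can be illustrated with the standard Kabsch/Procrustes solution for known point correspondences; the paper's method is additionally robust to outliers, so the sketch below is only a simplified stand-in with illustrative data.

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rotation R and translation t with target ~= R @ source + t
    (Kabsch/Procrustes), assuming point correspondences are already known."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Register a depth-based model to an image-based model differing by a known pose.
rng = np.random.default_rng(2)
depth_pts = rng.random((500, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
image_pts = depth_pts @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(depth_pts, image_pts)
print(np.allclose(R, R_true), np.round(t, 3))
```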

  9. 3D reconstruction of outdoor environments from omnidirectional range and color images

    NASA Astrophysics Data System (ADS)

    Asai, Toshihiro; Kanbara, Masayuki; Yokoya, Naokazu

    2005-03-01

    This paper describes a 3D modeling method for wide-area outdoor environments which is based on integrating omnidirectional range and color images. In the proposed method, outdoor scenes can be efficiently digitized by an omnidirectional laser rangefinder, which can obtain a 3D shape with high accuracy, and an omnidirectional multi-camera system (OMS), which can capture a high-resolution color image. Multiple range images are registered by minimizing the distances between corresponding points in the different range images. In order to register multiple range images stably, the points on planar portions detected from the range data are used in the registration process. The position and orientation acquired by RTK-GPS and a gyroscope are used as initial values for the simultaneous registration. The 3D model obtained by registration of the range data is texture-mapped with textures selected from the omnidirectional images, taking into consideration the resolution of the textures and occlusions of the model. In experiments, we have carried out 3D modeling of our campus with the proposed method.

  10. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech-synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  11. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using single-dimensional depictions that are useful to examine certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
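    One common way to obtain 3D coordinates from a Hi-C contact-frequency matrix, shown here only as a hedged illustration and not necessarily the authors' pipeline, is to convert frequencies into distances and embed them with classical multidimensional scaling.

```python
import numpy as np

def embed_from_contacts(freq, alpha=1.0):
    """Embed genomic bins in 3-D from a Hi-C contact-frequency matrix using
    classical multidimensional scaling on distances d = 1 / freq**alpha.
    This is one common heuristic, not necessarily the paper's pipeline."""
    with np.errstate(divide="ignore"):
        d = np.where(freq > 0, 1.0 / freq ** alpha,
                     np.nanmax(1.0 / freq[freq > 0]))
    np.fill_diagonal(d, 0.0)
    n = len(d)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (d ** 2) @ J                    # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:3]               # three largest eigenvalues
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))

# Toy contact matrix: frequency decays with genomic separation.
n = 60
sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) + 1
freq = 1.0 / sep.astype(float)
coords = embed_from_contacts(freq)
print(coords.shape)    # (60, 3) coordinates ready for 3-D rendering
```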

  12. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  13. The Complete Virtual 3D Reconstruction of the East Pediment of the Temple of Zeus at Olympia

    NASA Astrophysics Data System (ADS)

    Patay-Horváth, A.

    2011-09-01

    The arrangement of the five central figures of the east pediment of the temple of Zeus at Olympia has been the subject of scholarly debate since the discovery of the fragments more than a century ago. In theory, there are four substantially different arrangements, all of which have already been selected by certain scholars for various aesthetic, technical and other considerations. The present project tries to approach this controversy in a new way, by producing a virtual 3D reconstruction of the group. Digital models of the statues were produced by scanning the original fragments and by reconstructing them virtually. For this purpose an innovative software package (Leonar3Do) has also been employed. The virtual model of the pediment surrounding the sculptures was prepared on the basis of the latest architectural studies, and afterwards the reconstructed models were inserted into this frame in order to test the technical feasibility and aesthetic effects of the four possible arrangements. The paper gives an overview of the entire work and presents the final results, suggesting that two arrangements can be ruled out due to the limited space available in the pediment.

  14. WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays, in littoral environments

    NASA Astrophysics Data System (ADS)

    Reilly, Sean M.

    This study defines a new 3D Gaussian ray bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computational burden of computing accurate environmental effects in sonar training applications by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. It also improves model accuracy by incorporating real-world 3D effects, like horizontal refraction, into the model. This study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real-world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements in an environment where strong horizontal refraction effects have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy.
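
    For context, the transmission loss quantity being modeled is conventionally defined relative to the acoustic pressure at a 1 m reference distance, and is often approximated for spherical spreading with absorption as follows (the textbook definition, not the WaveQ3D equations):

    ```latex
    \mathrm{TL}(r) \;=\; -20\,\log_{10}\frac{|p(r)|}{|p(r_0 = 1\,\mathrm{m})|}
    \;\approx\; 20\,\log_{10} r \;+\; \alpha\, r
    ```

    where alpha is the frequency-dependent absorption coefficient in dB per unit distance.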

  15. Mass Spectrometry of 3D-printed plastic parts under plasma and radiative heat environments

    NASA Astrophysics Data System (ADS)

    Rivera, W. F.; Romero-Talamas, C. A.; Bates, E. M.; Birmingham, W.; Takeno, J.; Knop, S.

    2015-11-01

    We present the design and preliminary results of a mass spectrometry system used to assess vacuum compatibility of 3D-printed parts, developed at the Dusty Plasma Laboratory of the University of Maryland Baltimore County (UMBC). A decrease in outgassing was observed when electroplated parts were inserted in the test chamber vs. non electroplated ones. Outgassing will also be tested under different environments such as plasma and radiative heat. Heat will be generated by a titanium getter pump placed inside a 90 degree elbow, such that titanium does not coat the part. A mirror inside the elbow will be used to throttle the heat arriving at the part. Plasma exposure of 3D printed parts will be achieved by placing the parts in a separate chamber connected to the spectrometer by a vacuum line that is differentially pumped. The signals from the mass spectrometer will be analyzed to see how the vacuum conditions fluctuate under different plasma discharges.

  16. Virtual Learning Environments Designed in Brazil.

    ERIC Educational Resources Information Center

    Eichler, Marcelo L.; Goncalves, Mario R.; da Silva, Flavia O. M.; Junges, Fernando; Del Pino, Jose C.

    2003-01-01

    Discusses instructional design for computerized pedagogic materials and emphasizes the elements of activity and discovery in creating effective learning experiences. Describes a virtual learning environment designed in Brazil that is open to different forms of use so teachers and students can decide on the best ways of using it. (LRW)

  17. Using Immersive Virtual Environments for Certification

    NASA Technical Reports Server (NTRS)

    Lutz, R.; Cruz-Neira, C.

    1998-01-01

    Immersive virtual environments (VEs) technology has matured to the point where it can be utilized as a scientific and engineering problem solving tool. In particular, VEs are starting to be used to design and evaluate safety-critical systems that involve human operators, such as flight and driving simulators, complex machinery training, and emergency rescue strategies.

  18. On Mediation in Virtual Learning Environments.

    ERIC Educational Resources Information Center

    Davies, Larry; Hassan, W. Shukry

    2001-01-01

    Discusses concepts of mediation and focuses on the importance of implementing comprehensive virtual learning environments. Topics include education and technology as they relate to cultural change, social institutions, the Internet and computer-mediated communication, software design and human-computer interaction, the use of MOOs, and language.…

  19. Elearn: A Collaborative Educational Virtual Environment.

    ERIC Educational Resources Information Center

    Michailidou, Anna; Economides, Anastasios A.

    Virtual Learning Environments (VLEs) that support collaboration are one of the new technologies that have attracted great interest. VLEs are learning management software systems composed of computer-mediated communication software and online methods of delivering course material. This paper presents ELearn, a collaborative VLE for teaching…

  20. Virtual Learning Environments: Three Implementation Perspectives

    ERIC Educational Resources Information Center

    Keller, Christina

    2005-01-01

    Universities worldwide offer web-based courses distributed by virtual learning environments (VLEs). A common theoretical framework for implementing VLEs is the pedagogical perspective of instructional design. In this paper, three perspectives of implementation from information systems implementation research and organization theory are presented:…

  1. Virtual environments for telerobotic shared control

    NASA Technical Reports Server (NTRS)

    Christensen, Brian K.

    1994-01-01

    The use of a virtual environment to bring about telerobotic shared control is discussed. A knowledge base, referred to as the World Model, is used to aid the system in its decision making. Information from the World Model is displayed visually in order to aid the human side of human-computer interface.

  2. Middle School Students in Virtual Learning Environments

    ERIC Educational Resources Information Center

    Wyatt, Erin Drankwalter

    2010-01-01

    This ethnographic study examined middle school students engaged in a virtual learning environment used in concert with face-to-face instruction in order to complete a collaborative research project. Thirty-eight students from three eighth grade classes participated in this study where data were collected through observation of student work within…

  3. 3D visualisation and artistic imagery to enhance interest in `hidden environments' - new approaches to soil science

    NASA Astrophysics Data System (ADS)

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-09-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke 'soil atlas' was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets of artistic illustrations were produced, each set showing the effects of soil organic-matter density and water content on fungal density, to determine potential for visualisations and interactivity in stimulating interest in soil and soil illustrations, interest being an important factor in facilitating learning. The illustrations were created using 3D modelling packages, and a wide range of styles were produced. This allowed a preliminary study of the relative merits of different artistic styles, scientific-credibility, scale, abstraction and 'realism' (e.g. photo-realism or realism of forms), and any relationship between these and the level of interest indicated by the study participants in the soil visualisations and VE. The study found significant differences in mean interest ratings for different soil illustration styles, as well as in the perception of scientific-credibility of these styles, albeit for both measures there was considerable difference of attitude between participants about particular styles. There was also found to be a highly significant positive correlation between participants rating styles highly for interest and highly for scientific-credibility. There was furthermore a particularly high interest rating among participants for seeing temporal soil processes illustrated/animated, suggesting this as a particularly promising method for further stimulating interest in soil illustrations and soil itself.

  4. Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces

    ERIC Educational Resources Information Center

    Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed

    2012-01-01

    The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…

  5. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performance of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974

  6. Cognitive Virtualization: Combining Cognitive Models and Virtual Environments

    SciTech Connect

    Tuan Q. Tran; David I. Gertman; Donald D. Dudenhoeffer; Ronald L. Boring; Alan R. Mecham

    2007-08-01

    3D manikins are often used in visualizations to model human activity in complex settings. Manikins assist in developing an understanding of human actions, movements and routines in a variety of different environments representing new conceptual designs. One such environment is a nuclear power plant control room, where they have the potential to be used to simulate more precise ergonomic assessments of human work stations. Next-generation control rooms will pose numerous challenges for system designers. The manikin modeling approach by itself, however, may be insufficient for dealing with the desired technical advancements and challenges of next-generation automated systems. Uncertainty regarding effective staffing levels, and the potential for negative human performance consequences in the presence of advanced automated systems (e.g., reduced vigilance, poor situation awareness, mistrust or blind faith in automation, higher information load and increased complexity), call for further research. Baseline assessment of novel control room equipment and configurations needs to be conducted. These design uncertainties can be reduced through complementary analysis that merges ergonomic manikin models with models of higher cognitive functions, such as attention, memory, decision-making, and problem-solving. This paper discusses recent advancements in merging a theoretically driven cognitive modeling framework with a 3D visualization modeling tool to evaluate next-generation control room human factors and ergonomics. Though this discussion primarily focuses on control room design, such a merger between 3D visualization and cognitive modeling can be extended to various areas of focus such as training and scenario planning.

  7. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology

  8. Global Warming and the Arctic in 3D: A Virtual Globe for Outreach

    NASA Astrophysics Data System (ADS)

    Manley, W. F.

    2006-12-01

    Virtual Globes provide a new way to capture and inform the public's interest in environmental change. As an example, a recent Google Earth presentation conveyed 'key findings' from the Arctic Climate Impact Assessment (ACIA, 2004) to middle school students during the 2006 INSTAAR/NSIDC Open House at the University of Colorado. The 20-minute demonstration to 180 eighth graders began with an introduction and a view of the Arctic from space, zooming into the North American Arctic, then to a placemark for the first key finding, 'Arctic climate is now warming rapidly and much larger changes are projected'. An embedded link then opened a custom web page, with brief explanatory text, along with an ACIA graphic illustrating the rise in Arctic temperature, global CO2 concentrations, and carbon emissions for the last millennium. The demo continued with an interactive tour of other key findings (Reduced Sea Ice, Changes for Animals, Melting Glaciers, Coastal Erosion, Changes in Vegetation, Melting Permafrost, and others). Each placemark was located somewhat arbitrarily (which may be a concern for some audiences), but the points represented the messages in a geographic sense and enabled a smooth visual tour of the northern latitudes. Each placemark was linked to custom web pages with photos and concise take-home messages. The demo ended with navigation to Colorado, then Boulder, then the middle school that the students attended, all the while speaking to implications as they live their lives locally. The demo piqued the students' curiosity, and in this way better conveyed important messages about the Arctic and climate change. The use of geospatial visualizations for outreach and education appears to be in its infancy, with much potential.
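
    The presentation described above is built from Google Earth placemarks whose descriptions link to custom web pages. One minimal way to generate such a placemark as a KML string from Python is sketched below; the name, URL and coordinates are hypothetical placeholders and are not taken from the ACIA demo.

    ```python
    # Writes a single Google Earth placemark whose description links to an
    # explanatory web page (all values below are hypothetical placeholders).
    KML = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>Reduced Sea Ice</name>
        <description><![CDATA[
          <a href="https://example.org/acia/sea-ice.html">Key finding: explanation and graphic</a>
        ]]></description>
        <Point><coordinates>-150.0,75.0,0</coordinates></Point><!-- lon,lat,alt -->
      </Placemark>
    </kml>
    """

    with open("acia_key_finding.kml", "w", encoding="utf-8") as f:
        f.write(KML)
    ```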

  9. 3D-modeling of Callisto's sputtered surface-exosphere environment

    NASA Astrophysics Data System (ADS)

    Lammer, Helmut; Pfleger, Martin; Lindqvist, Jesper; Lichtenegger, Herbert; Holmström, Mats; Vorburger, Audrey; Wurz, Peter; Barabash, Stas

    2016-04-01

    We study the stoichiometric release of various surface elements caused by plasma sputtering from an assumed icy and non-icy (i.e., chondritic) surface into the exosphere of the Jovian satellite Callisto. We apply a 3D plasma planetary interaction hybrid model that is used for the evaluation of precipitation maps of magnetospheric H+, O+ and S+ sputter agents onto Callisto's surface. The obtained precipitation maps are then applied to the assumed surface compositions, where the related sputter yields are calculated by means of the 2013 SRIM code and are coupled with a 3D exosphere model. Sputtered surface particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction or return to the surface. We also study the effect of collisions between sputtered species and ambient O2 molecules, which form a tiny atmosphere near the satellite's surface, and compare the exosphere densities obtained from the 3D model with and without a background gaseous envelope with recent 1D model results. Finally, we discuss whether the Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP) on board the JUICE mission, will be able to detect sputtered particles from Callisto's icy and non-icy surface.
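
    The simplest piece of the trajectory bookkeeping mentioned above, deciding whether a sputtered particle can escape Callisto's gravity, reduces to a comparison with the local escape speed. The constants and function below are rough assumed values for illustration only; the actual model integrates full 3D trajectories and includes collisions with the O2 envelope.

    ```python
    import numpy as np

    GM_CALLISTO = 7.18e12   # gravitational parameter, m^3 s^-2 (approximate)
    R_CALLISTO = 2.410e6    # mean radius, m (approximate)

    def escapes(altitude_m, speed_m_s):
        """Crude classification of a sputtered particle: does its speed at the
        given altitude exceed the local escape speed?"""
        r = R_CALLISTO + altitude_m
        v_esc = np.sqrt(2.0 * GM_CALLISTO / r)
        return speed_m_s > v_esc
    ```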

  10. Fast 3D modeling in complex environments using a single Kinect sensor

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Liu, Jingmeng

    2014-02-01

    Three-dimensional (3D) modeling technology has been widely used in reverse engineering, urban planning, robot navigation, and many other applications. How to build a dense model of the environment with limited processing resources is still a challenging topic. A fast 3D modeling algorithm that only uses a single Kinect sensor is proposed in this paper. For every color image captured by the Kinect, corner feature extraction is carried out first. Then a spiral search strategy is utilized to select the region of interest (ROI) that contains enough feature corners. Next, the iterative closest point (ICP) method is applied to the points in the ROI to align consecutive data frames. Finally, an analysis of which areas can be walked through by human beings is presented. Comparative experiments with the well-known KinectFusion algorithm have been carried out, and the results demonstrate that the accuracy of the proposed algorithm is the same as that of KinectFusion, but its computing speed is nearly twice that of KinectFusion. 3D modeling of two scenes of a public garden and traversable-area analysis in these regions further verified the feasibility of our algorithm.
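
    The spiral ROI selection step, as we read it from the abstract, scans candidate windows ring by ring outward from the image centre until one contains enough detected corners. The sketch below is our interpretation only; the window size, thresholds and function names are hypothetical.

    ```python
    import numpy as np

    def ring_offsets(max_ring):
        """Yield (row, col) window offsets ring by ring, expanding outward from (0, 0)."""
        yield (0, 0)
        for ring in range(1, max_ring + 1):
            for dc in range(-ring, ring + 1):      # top and bottom rows of the ring
                yield (-ring, dc)
                yield (ring, dc)
            for dr in range(-ring + 1, ring):      # left and right columns of the ring
                yield (dr, -ring)
                yield (dr, ring)

    def select_roi(corners, img_shape, roi=64, min_corners=30, max_ring=4):
        """Return the first roi x roi window (top-left corner plus size) containing
        at least `min_corners` corner points, searching outward from the image
        centre; `corners` is an N x 2 array of (row, col) positions."""
        h, w = img_shape
        cy, cx = h // 2, w // 2
        pts = np.asarray(corners)
        for dr, dc in ring_offsets(max_ring):
            r0, c0 = cy + dr * roi - roi // 2, cx + dc * roi - roi // 2
            inside = ((pts[:, 0] >= r0) & (pts[:, 0] < r0 + roi) &
                      (pts[:, 1] >= c0) & (pts[:, 1] < c0 + roi))
            if inside.sum() >= min_corners:
                return r0, c0, roi, roi
        return None
    ```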

  11. Interactive Motion Planning for Steerable Needles in 3D Environments with Obstacles

    PubMed Central

    Patil, Sachin; Alterovitz, Ron

    2011-01-01

    Bevel-tip steerable needles for minimally invasive medical procedures can be used to reach clinical targets that are behind sensitive or impenetrable areas and are inaccessible to straight, rigid needles. We present a fast algorithm that can compute motion plans for steerable needles to reach targets in complex, 3D environments with obstacles at interactive rates. The fast computation makes this method suitable for online control of the steerable needle based on 3D imaging feedback and allows physicians to interactively edit the planning environment in real-time by adding obstacle definitions as they are discovered or become relevant. We achieve this fast performance by using a Rapidly Exploring Random Tree (RRT) combined with a reachability-guided sampling heuristic to alleviate the sensitivity of the RRT planner to the choice of the distance metric. We also relax the constraint of constant-curvature needle trajectories by relying on duty-cycling to realize bounded-curvature needle trajectories. These characteristics enable us to achieve orders of magnitude speed-up compared to previous approaches; we compute steerable needle motion plans in under 1 second for challenging environments containing complex, polyhedral obstacles and narrow passages. PMID:22294214
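
    For orientation, the planner underlying this work is a Rapidly Exploring Random Tree. The sketch below shows only the generic RRT skeleton in 3D with straight-line extensions; the paper's curvature-bounded needle kinematics, duty-cycling and reachability-guided sampling heuristic are deliberately omitted, and `sample_fn` / `collision_free` are assumed user-supplied callables.

    ```python
    import numpy as np

    def rrt_plan(start, goal, sample_fn, collision_free,
                 step=2.0, goal_tol=3.0, goal_bias=0.1, max_iters=5000, rng=None):
        """Generic RRT: grow a tree from `start` until a node lands within
        `goal_tol` of `goal`, then return the path of tree nodes."""
        rng = rng or np.random.default_rng()
        goal = np.asarray(goal, float)
        nodes, parents = [np.asarray(start, float)], [-1]
        for _ in range(max_iters):
            target = goal if rng.random() < goal_bias else sample_fn(rng)
            i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))  # nearest node
            d = target - nodes[i]
            new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)           # bounded step
            if not collision_free(nodes[i], new):
                continue
            nodes.append(new)
            parents.append(i)
            if np.linalg.norm(new - goal) < goal_tol:                        # goal reached
                path, j = [], len(nodes) - 1
                while j != -1:
                    path.append(nodes[j])
                    j = parents[j]
                return path[::-1]
        return None
    ```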

  12. Prediction of car cabin environment by means of 1D and 3D cabin model

    NASA Astrophysics Data System (ADS)

    Fišer, J.; Pokorný, J.; Jícha, M.

    2012-04-01

    Thermal comfort and the reduction of the energy requirements of air-conditioning systems in vehicle cabins are currently intensively investigated and highly topical issues. The article deals with two approaches to modelling the car cabin environment; the first model was created in the simulation language Modelica (a typical 1D approach without cabin geometry) and the second one was created in the specialized software Theseus-FE (a 3D approach with cabin geometry). The performance and capabilities of these tools are demonstrated on the example of a car cabin, and the simulation results are compared with measurements from a real car cabin in a climate chamber.

  13. Visualizing Moon Phases in Virtual and Physical Astronomy Environments

    NASA Astrophysics Data System (ADS)

    Udomprasert, Patricia S.; Goodman, Alyssa A.; Sunbury, Susan; Zhang, Zhihui; Sadler, Philip M.; Dussault, Mary E.; Wang, Qin; Johnson, Erin; Lotridge, Erin; Jackson, Jonathan; Constantin, Ana-Maria

    2015-01-01

    We report on the development and testing of a 'Visualization Lab,' which includes both physical and virtual models, designed to teach middle school students about the cause of the Moon's phases and eclipses, phenomena that require students to visualize complex 3D relationships amongst the Sun, Earth, and Moon. The physical models included styrofoam balls, a lamp, and hula hoops, and we used two different kinds of virtual models: a simple 2D simulator versus a complex 3D model in WorldWide Telescope (WWT), an immersive, free astronomy data visualization environment. In Phase 1, all students used the physical model first, then one of the two virtual models. Students who used WWT as the virtual model had stronger learning gains than students who used the 2D simulator, and they had more interest in continuing to explore the computer model independently after the formal instruction was complete. In Phase 2, all students used WWT, but half used the physical model first, while the other half used WWT first. The Phase 2 pilot (N=68) showed that level of prior knowledge may influence which model order would be more beneficial to student learning. Students with low prior knowledge benefited from using the physical model first, and students with high prior knowledge benefited from using WWT first. Three additional cohorts in 2013-14 (N=226) showed that performance on the multiple-choice assessment is comparable regardless of model order, with a regression analysis showing a slight benefit to using WWT first for all levels of prior knowledge. For two cohorts where we have coded open responses, students who used WWT first expressed fewer misconceptions about the cause of Moon phases on the posttest. Despite the stronger learning outcomes from using WWT first, only 19% of students preferred having WWT first or wished they had used WWT first.

  14. Communication Modes, Persuasiveness, and Decision-Making Quality: A Comparison of Audio Conferencing, Video Conferencing, and a Virtual Environment

    ERIC Educational Resources Information Center

    Lockwood, Nicholas S.

    2011-01-01

    Geographically dispersed teams rely on information and communication technologies (ICTs) to communicate and collaborate. Three ICTs that have received attention are audio conferencing (AC), video conferencing (VC), and, recently, 3D virtual environments (3D VEs). These ICTs offer modes of communication that differ primarily in the number and type…

  15. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for

  16. Pharmacophore modeling, virtual screening and 3D-QSAR studies of 5-tetrahydroquinolinylidine aminoguanidine derivatives as sodium hydrogen exchanger inhibitors.

    PubMed

    Bhatt, Hardik G; Patel, Paresh K

    2012-06-01

    Sodium hydrogen exchanger (SHE) inhibitors are among the most important targets in the treatment of myocardial ischemia. In the course of our research into new types of non-acylguanidine SHE inhibitors, the inhibitory activities of 5-tetrahydroquinolinylidine aminoguanidine derivatives were used to build pharmacophore and 3D-QSAR models. The Genetic Algorithm Similarity Program (GASP) was used to derive a 3D pharmacophore model, which was used for effective alignment of the data set. Eight molecules were selected on the basis of structural diversity to build 10 different pharmacophore models. Model 1 was considered the best model as it had the highest fitness score compared to the other nine models. The obtained model contained two acceptor sites, two donor atoms and one hydrophobic region. Pharmacophore modeling was followed by substructure searching and virtual screening. The best CoMFA model, representing steric and electrostatic fields, obtained for 30 training set molecules was statistically significant, with a cross-validated coefficient (q²) of 0.673 and a conventional coefficient (r²) of 0.988. In addition to the steric and electrostatic fields observed in CoMFA, CoMSIA also represents hydrophobic, hydrogen bond donor and hydrogen bond acceptor fields. The CoMSIA model was also significant, with a cross-validated coefficient (q²) and conventional coefficient (r²) of 0.636 and 0.986, respectively. Both models were validated by an external test set of eight compounds and gave satisfactory predictions (r²pred) of 0.772 and 0.701 for the CoMFA and CoMSIA models, respectively. This pharmacophore-based 3D-QSAR approach provides significant insights that can be used to design novel, potent and selective SHE inhibitors. PMID:22546667
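
    For reference, the cross-validated coefficient quoted above is conventionally computed from leave-one-out predictions (standard QSAR usage; the formula is not given in the abstract itself):

    ```latex
    q^{2} \;=\; 1 \;-\; \frac{\sum_{i}\bigl(y_{i} - \hat{y}_{i}^{\mathrm{LOO}}\bigr)^{2}}
                             {\sum_{i}\bigl(y_{i} - \bar{y}\bigr)^{2}}
    ```

    where y_i are the observed activities, ŷ_i^LOO the leave-one-out predicted activities and ȳ the mean observed activity; the conventional r² uses the fitted values of the full model in place of the leave-one-out predictions.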

  17. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced using computers, have been published for hobby and entertainment. It is said that activation of the brain, improvement of visual eyesight, reduction of mental stress, healing effects, etc. can be expected when a CGS is properly appreciated as a stereoscopic view. There is a great deal of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewing the stereogram directly with two eyes, a skill acquired by training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.

  18. New approaches to virtual environment surgery

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Twombly, A.; Lee, A. W.; Cheng, R.; Senger, S.

    1999-01-01

    This research focused on two main problems: 1) low-cost, high-fidelity stereoscopic imaging of complex tissues and organs; and 2) virtual cutting of tissue. A further objective was to develop these images and virtual tissue cutting methods for use in a telemedicine project that would connect remote sites using the Next Generation Internet. For goal one, we used a CT scan of a human heart, a desktop PC with an OpenGL graphics accelerator card, and LCD stereoscopic glasses. The use of multiresolution meshes ranging from approximately 1,000,000 to 20,000 polygons greatly sped up interactive rendering while retaining the general topography of the dataset. For goal two, we used a CT scan of an infant skull with premature closure of the right coronal suture, a Silicon Graphics Onyx workstation, a Fakespace Immersive WorkBench and CrystalEyes LCD glasses. The high-fidelity mesh of the skull was reduced from one million to 50,000 polygons. The cut path was automatically calculated as the shortest distance along the mesh between a small number of hand-selected vertices. The region outlined by the cut path was then separated from the skull and translated/rotated to assume a new position. The results indicate that widespread high-fidelity imaging in virtual environments is possible using ordinary PC capabilities if appropriate mesh reduction methods are employed. The software cutting tool is applicable to the heart and other organs for surgery planning, for training surgeons in a virtual environment, and for telemedicine purposes.
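
    The cut-path computation described above amounts to a shortest-path search over the mesh edge graph between the hand-selected vertices. A minimal Dijkstra sketch with Euclidean edge weights is shown below; it is a generic illustration under that reading, not the authors' code.

    ```python
    import heapq
    from collections import defaultdict
    from math import dist

    def shortest_mesh_path(vertices, edges, src, dst):
        """Dijkstra over a mesh edge graph. `vertices` is a list of (x, y, z)
        tuples, `edges` a list of (i, j) index pairs; returns the vertex-index
        path of minimum total edge length from `src` to `dst`, or None."""
        adj = defaultdict(list)
        for i, j in edges:
            w = dist(vertices[i], vertices[j])
            adj[i].append((j, w))
            adj[j].append((i, w))
        best, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > best.get(u, float("inf")):
                continue                            # stale heap entry
            for v, w in adj[u]:
                if d + w < best.get(v, float("inf")):
                    best[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        if dst not in best:
            return None
        path, u = [dst], dst
        while u != src:
            u = prev[u]
            path.append(u)
        return path[::-1]
    ```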

  19. Exploring the User Experience of Three-Dimensional Virtual Learning Environments

    ERIC Educational Resources Information Center

    Shin, Dong-Hee; Biocca, Frank; Choo, Hyunseung

    2013-01-01

    This study examines the users' experiences with three-dimensional (3D) virtual environments to investigate the areas of development as a learning application. For the investigation, the modified technology acceptance model (TAM) is used with constructs from expectation-confirmation theory (ECT). Users' responses to questions about…

  20. Nonthreshold-based event detection for 3D environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches for event detection are mainly based on some predefined threshold values and, thus, are often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds but some complex pattern in the full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for the real 3D sensor monitoring environment. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events through matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to prove the efficacy and efficiency of this approach on detecting events of complex phenomena from real-life records.
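
    The abstract does not spell out the matching criterion, so the sketch below is only one plausible reading: gathered data maps are compared against a spatiotemporal event template by normalized cross-correlation, and windows with a high similarity score are flagged. Note that the paper's approach is nonthreshold-based with respect to raw attribute values; the score cutoff here is purely an illustrative stand-in for its pattern-matching decision.

    ```python
    import numpy as np

    def ncc(window, template):
        """Normalized cross-correlation between a spatiotemporal data window and
        an event template of the same shape (e.g., T x X x Y x Z grids)."""
        w = window - window.mean()
        t = template - template.mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        return float((w * t).sum() / denom) if denom > 0 else 0.0

    def detect_events(maps, template, min_score=0.85):
        """Slide the template along the time axis of the collected data maps and
        return the start frames whose match score reaches `min_score`."""
        T = template.shape[0]
        return [t0 for t0 in range(maps.shape[0] - T + 1)
                if ncc(maps[t0:t0 + T], template) >= min_score]
    ```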

  1. The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Lawless-Reljic, Sabine Karine

    2010-01-01

    The growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions about the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text.…

  2. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  3. Visualization and Interpretation in 3D Virtual Reality of Topographic and Geophysical Data from the Chicxulub Impact Crater

    NASA Astrophysics Data System (ADS)

    Rosen, J.; Kinsland, G. L.; Borst, C.

    2011-12-01

    We have assembled Shuttle Radar Topography Mission (SRTM) data (Borst and Kinsland, 2005), gravity data (Bedard, 1977), horizontal gravity gradient data (Hildebrand et al., 1995), magnetic data (Pilkington et al., 2000) and GPS topography data (Borst and Kinsland, 2005) from the Chicxulub Impact Crater buried on the Yucatan Peninsula of Mexico. These data sets are imaged as gridded surfaces and are all georegistered, within an interactive 3D virtual reality (3DVR) visualization and interpretation system created and maintained in the Center for Advanced Computer Studies at the University of Louisiana at Lafayette. We are able to view and interpret the data sets individually or together and to scale and move the data or to move our physical head position so as to achieve the best viewing perspective for interpretation. A feature which is especially valuable for understanding the relationships between the various data sets is our ability to "interlace" the 3D images. "Interlacing" is a technique we have developed whereby the data surfaces are moved along a common axis so that they interpenetrate. This technique leads to rapid and positive identification of spatially corresponding features in the various data sets. We present several images from the 3D system, which demonstrate spatial relationships amongst the features in the data sets. Some of the anomalies in gravity are very nearly coincident with anomalies in the magnetic data as one might suspect if the causal bodies are the same. Other gravity and magnetic anomalies are not spatially coincident indicating different causal bodies. Topographic anomalies display a strong spatial correspondence with many gravity anomalies. In some cases small gravity anomalies and topographic valleys are caused by shallow dissolution within the Tertiary cover along faults or fractures propagated upward from the buried structure. In other cases the sources of the gravity anomalies are in the more deeply buried structure from which

  4. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provide knowledge about maximum functional placement for tasking situations. Calculations for a full body, functional reach envelope for microgravity environments are imperative. To this end, three dimensional computer modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full body, functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  5. 3D virtual planning in orthognathic surgery and CAD/CAM surgical splints generation in one patient with craniofacial microsomia: a case report

    PubMed Central

    Vale, Francisco; Scherzberg, Jessica; Cavaleiro, João; Sanz, David; Caramelo, Francisco; Maló, Luísa; Marcelino, João Pedro

    2016-01-01

    Objective: In this case report, the feasibility and precision of tridimensional (3D) virtual planning in one patient with craniofacial microsomia are tested using Nemoceph 3D-OS software (Software Nemotec SL, Madrid, Spain) to predict postoperative outcomes on hard tissue and produce CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) surgical splints. Methods: The clinical protocol consists of 3D data acquisition of the craniofacial complex by cone-beam computed tomography (CBCT) and surface scanning of the plaster dental casts. The "virtual patient" created underwent virtual surgery and a simulation of postoperative results on hard tissues. Surgical splints were manufactured using CAD/CAM technology in order to transfer the virtual surgical plan to the operating room. Intraoperatively, both CAD/CAM and conventional surgical splints were comparable. A second set of 3D images was obtained after surgery to acquire linear measurements and compare them with measurements obtained when predicting postoperative results virtually. Results: A high similarity was found between both types of surgical splints, with equal fitting on the dental arches. The linear measurements presented some discrepancies between the actual surgical outcomes and the predicted results from the 3D virtual simulation, but caution must be taken in the analysis of these results due to several variables. Conclusions: The reported case confirms the clinical feasibility of the described computer-assisted orthognathic surgical protocol. Further progress in the development of technologies for 3D image acquisition and improvements in software programs to simulate postoperative changes on soft tissue are required. PMID:27007767

  6. Use of 3D conformal symbology on HMD for a safer flight in degraded visual environment

    NASA Astrophysics Data System (ADS)

    Klein, Ofer; Doehler, Hans-Ullrich; Trousil, Thomas; Peleg-Marzan, Ruthy

    2012-06-01

    Since the entry of coalition forces into Afghanistan and Iraq, a steep rise in the rate of accidents has occurred as a result of flying and landing in Degraded Visual Environment (DVE) conditions. Such conditions exist in various areas around the world and include bad weather, dust and snow landings (brownout and whiteout), and low illumination on dark nights. A promising solution is a novel 3D conformal symbology displayed on a head-tracked helmet-mounted display (HMD). The 3D conformal symbology approach provides space-stabilized three-dimensional symbology presented on the pilot's helmet-mounted display and has the potential to deliver a step change in HMD performance. It offers an intuitive way of presenting crucial information to the pilots in order to increase situational awareness, lower the pilots' workload and thus dramatically enhance flight safety. The pilots can fly "heads out" while the necessary flight and mission information is presented in an intuitive manner, conformal with the real world and in real time. Several evaluation trials have been conducted in the UK, US and Israel using systems developed by Elbit Systems to demonstrate the potential of the technology, the concept and the specific systems to provide a solution for DVE flight conditions.

  7. Snap2Diverse: coordinating information visualizations and virtual environments

    NASA Astrophysics Data System (ADS)

    Polys, Nicholas F.; North, Chris; Bowman, Doug A.; Ray, Andrew; Moldenhauer, Maxim; Dandekar, Chetan

    2004-06-01

    The field of Information Visualization is concerned with improving how users perceive, understand, and interact with visual representations of abstract information. Immersive Virtual Environments (VEs) excel at a greater comprehension of spatial information. This project addresses the intersection of these two fields known as Information-Rich Virtual Environments (IRVEs) where perceptually realistic information, such as models and scenes, are enhanced with abstract information, such as text, numeric data, hyperlinks, or multimedia resources. IRVEs present a number of important design challenges including the management, coordination, and display of interrelated perceptual and abstract information. We describe a set of design issues for this type of integrated visualization and demonstrate a coordinated, multiple-views approach to support 2D and 3D visualization interactions such as overview, navigation, details-on-demand, and brushing-and-linking. In the CAVE, spatial information in a VE is interactively linked to embedded visualizations of related abstract information. Software architecture issues are discussed with details of our implementation applied to the domain of chemical information visualization. Lastly, we subject our system to an informal usability evaluation and identify usability issues with interaction and navigation that guides future work in these environments.

  8. 3D Virtual Reality Applied in Tectonic Geomorphic Study of the Gombori Range of Greater Caucasus Mountains

    NASA Astrophysics Data System (ADS)

    Sukhishvili, Lasha; Javakhishvili, Zurab

    2016-04-01

    The Gombori Range represents the southern part of the young Greater Caucasus Mountains and stretches from NW to SE. The range separates the Alazani and Iori basins within the eastern Georgian province of Kakheti. The active phase of the Caucasian orogeny started in the Pliocene, but according to the alluvial sediments of the Gombori Range (mapped in the Soviet geologic map), we observe its uplift to be a Quaternary event. The highest peak of the Gombori Range has an absolute elevation of 1991 m, while the neighboring Alazani valley reaches only 400 m. We assume the range has a very fast uplift rate, which could have triggered reversals of stream flow direction in the Quaternary. To check these preliminary assumptions we are going to use tectonic and fluvial geomorphic and stratigraphic approaches, including paleocurrent analyses and various affordable absolute dating techniques, to detect evidence of river course reversals and to date them. For these purposes we have selected the river Turdo outcrop. The river itself flows northwards from the Gombori Range and, near the region's main city of Telavi, generates a 30-40 m high continuous outcrop along a 1 km section. The Turdo outcrop has very steep walls and requires special climbing skills to work on it. The goal of this particular study is to avoid the time- and resource-consuming ground survey of this steep, high and wide outcrop and to test 3D aerial and ground-based photogrammetric modelling and analysis approaches in the initial stage of the tectonic geomorphic study. Using this type of remote sensing and virtual lab analysis of the 3D outcrop model, we roughly delineated stratigraphic layers, selected exact locations for applying various research techniques and planned safe and suitable climbing routes for reaching the investigation sites.

  9. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction. PMID:25974936

  10. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
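
    Of the three additions to plain ICP listed above, the early-warning/escape idea can be sketched as a restart policy: if the first registration appears stuck in a poor local minimum, retry from sampled candidate transformations and keep the best result. The wrapper below is our reading of that idea only; `run_icp`, its interface and the residual threshold are assumed placeholders, not the authors' code.

    ```python
    def registration_with_escape(run_icp, candidate_inits, err_ok=0.05):
        """Run ICP once; if the final residual suggests a poor local minimum,
        retry from each sampled candidate initialization and keep the best
        (transform, error) pair found."""
        best_tf, best_err = run_icp(init=None)      # plain ICP run
        if best_err > err_ok:                       # early warning: likely stuck
            for init in candidate_inits:            # heuristic escape attempts
                tf, err = run_icp(init=init)
                if err < best_err:
                    best_tf, best_err = tf, err
        return best_tf, best_err
    ```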

  12. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial gravity and microgravity. A partial gravity simulator (Pogo), which uses pneumatic suspension, is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  13. Virtual Cities as a Collaborative Educational Environment

    NASA Astrophysics Data System (ADS)

    Müller, Daniel Nehme; de Oliveira, Otto Lopes Braitback; Remião, Joelma Adriana Abrão; Silveira, Paloma Dias; Martins, Márcio André Rodrigues; Axt, Margarete

    The CIVITAS (Virtual Cities with Technologies for Learning and Simulating) project presents a research, teaching and extension approach directed at the construction of cities imagined by students in the first years of elementary school, with an emphasis on the fourth grade. The teacher ventures a deviation from the proposed official curriculum in order to reflect upon the invention of cities along with the children. Within this context, the game Città is introduced as an environment that allows the creation of digital real/virtual/imagined cities and enables different forms of interaction among the students through networked computers. The cooperative situations made possible by access to the game are tools for teachers and students to think about the information that operates as general rules and watchwords in the invention of the city/knowledge.

  14. Scripting human animations in a virtual environment

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.

    1994-01-01

    The current deficiencies of virtual environments (VEs) are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a computer model of a human with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data are used to define key frames; motion is interpolated between them, and post-processing adds a more detailed environment. The result is a considerable saving in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
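
    As an illustration of the key-frame step described above, the simplest possible interpolation of recorded joint angles between key frames is linear blending, sketched below with hypothetical names (a production pipeline would typically use quaternion slerp for rotations).

    ```python
    import numpy as np

    def interpolate_pose(keyframes, t):
        """Linearly interpolate a pose at time `t` from `keyframes`, a list of
        (time, angles) pairs sorted by time, where `angles` is a NumPy array of
        recorded joint angles."""
        times = [k[0] for k in keyframes]
        if t <= times[0]:
            return keyframes[0][1]
        if t >= times[-1]:
            return keyframes[-1][1]
        i = int(np.searchsorted(times, t)) - 1
        (t0, a0), (t1, a1) = keyframes[i], keyframes[i + 1]
        alpha = (t - t0) / (t1 - t0)
        return (1.0 - alpha) * a0 + alpha * a1
    ```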

  15. Complex conditional control by pigeons in a continuous virtual environment.

    PubMed

    Qadri, Muhammad A J; Reid, Sean; Cook, Robert G

    2016-01-01

    We tested two pigeons in a continuously streaming digital environment. Using animation software that constantly presented a dynamic, three-dimensional (3D) environment, the animals were tested with a conditional object identification task. The correct object at a given time depended on the virtual context currently streaming in front of the pigeon. Pigeons were required to accurately peck correct target objects in the environment for food reward, while suppressing any pecks to intermixed distractor objects which delayed the next object's presentation. Experiment 1 established that the pigeons' discrimination of two objects could be controlled by the surface material of the digital terrain. Experiment 2 established that the pigeons' discrimination of four objects could be conjunctively controlled by both the surface material and topography of the streaming environment. These experiments indicate that pigeons can simultaneously process and use at least two context cues from a streaming environment to control their identification behavior of passing objects. These results add to the promise of testing interactive digital environments with animals to advance our understanding of cognition and behavior. PMID:26781058

  16. Virtual Environment Design for Low/Zero Visibility Tower Tools

    NASA Technical Reports Server (NTRS)

    Reisman, Ron; Farouk, Ahmed; Edwards, Thomas A. (Technical Monitor)

    1998-01-01

    This paper describes prototype software for the three-dimensional display of aircraft movement based on real-time radar and other Air Traffic Control (ATC) information. This prototype can be used to develop operational tools for controllers in ATC towers who cannot view aircraft in low or zero visibility (LZV) weather conditions. The controller could also use the software to arbitrarily reposition his virtual eyepoint to overcome physical obstructions or increase situation awareness. The LZV Tower tool prototype consists of server and client components. The server interfaces to operational ATC radar and communications systems, sending processed data to a client process written in Java. This client process runs under Netscape Communicator to provide an interactive perspective display of aircraft in the airport environment. Prototype VRML airport models were derived from 3-D databases used in FAA-certified high-fidelity flight simulators. The web-based design offers potential efficiency increases and decreased costs in the development and deployment of operational LZV Tower tools.

  17. Effects of active navigation on object recognition in virtual environments.

    PubMed

    Hahm, Jinsun; Lee, Kanghee; Lim, Seung-Lark; Kim, Sei-Young; Kim, Hyun-Taek; Lee, Jang-Han

    2007-04-01

    We investigated the importance and efficiency of active and passive exploration for the recognition of objects in a variety of virtual environments (VEs). In this study, 54 participants were randomly allocated to either an active or a passive navigation condition. In the active condition, participants self-paced and controlled their own navigation, whereas in the passive condition navigation was externally controlled. After navigating the VEs, participants were asked to recognize the objects that had been in them. The active navigation condition had a significantly higher percentage of hit responses (t(52) = 4.000, p < 0.01) and a significantly lower percentage of miss responses (t(52) = -3.763, p < 0.01) in object recognition than the passive condition. These results suggest that active navigation plays an important role in spatial cognition and provide an explanation for the efficiency of learning in a 3D-based program. PMID:17474852
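
    The group comparison reported above is an independent-samples t-test; a minimal sketch with fabricated hit-rate data (the per-participant scores are not given in the abstract):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Hypothetical hit-rate percentages for 27 participants per condition.
      active_hits = rng.normal(loc=80, scale=8, size=27)
      passive_hits = rng.normal(loc=70, scale=8, size=27)

      t, p = stats.ttest_ind(active_hits, passive_hits)  # df = 27 + 27 - 2 = 52
      print(f"t(52) = {t:.3f}, p = {p:.4f}")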

  18. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected onto three walls and the floor. The CAVE is a multi-person, room-sized, high-resolution, 3D video and audio environment. Graphics are rear-projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and

  19. A new dynamic 3D virtual methodology for teaching the mechanics of atrial septation as seen in the human heart.

    PubMed

    Schleich, Jean-Marc; Dillenseger, Jean-Louis; Houyel, Lucile; Almange, Claude; Anderson, Robert H

    2009-01-01

    Learning embryology remains difficult, since it requires understanding of many complex phenomena. The temporal evolution of developmental events has classically been illustrated using cartoons, which create difficulty in linking spatial and temporal aspects, such correlation being the keystone of descriptive embryology. We synthesized the bibliographic data from recent studies of atrial septal development. On the basis of this synthesis, consensus on the stages of atrial septation as seen in the human heart was reached by a group of experts in cardiac embryology and pediatric cardiology. This permitted the preparation of three-dimensional (3D) computer graphic objects for the anatomical components involved in the different stages of normal human atrial septation. We have provided a virtual guide to the process of normal atrial septation, the animation providing an appreciation of the temporal and morphologic events necessary to separate the systemic and pulmonary venous returns. We have shown that our animations of normal human atrial septation significantly improve the teaching of the complex developmental processes involved, and provide a new dynamic for the process of learning. PMID:19363807

  20. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    NASA Astrophysics Data System (ADS)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application that has been developed in a collaboration project between the NTNU University Library at the Norwegian University of Science and Technology in Trondheim, Norway, and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academy partners, aiming to develop a consistent methodology that enables the use of Virtual Environments as a metaphor for presenting manuscript content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototype books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether immersion in such environments can create wider engagement and support learning. The metaphor of 3D books and game design in combination allows the digital books to be handled through a tactile experience, substituting for physical browsing. In this paper we present some preliminary results on the enrichment of the user experience in such an environment.

  1. SHARED VIRTUAL ENVIRONMENTS FOR COLLECTIVE TRAINING

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen

    2000-01-01

    Historically NASA has trained teams of astronauts by bringing them to the Johnson Space Center in Houston to undergo generic training, followed by mission-specific training. This latter training begins after a crew has been selected for a mission (perhaps two years before the launch of that mission). While some Space Shuttle flights have included an astronaut from a foreign country, the International Space Station will be consistently crewed by teams composed of astronauts from two or more of the partner nations. The cost of training these international teams continues to grow in both monetary and personal terms. Thus, NASA has been seeking alternative training approaches for the International Space Station program. Since 1994 we have been developing, testing, and refining shared virtual environments for astronaut team training, including the use of virtual environments while in, or in transit to, the task location. In parallel with this effort, we have also been preparing applications for training teams of military personnel engaged in peacekeeping missions. This paper will describe the applications developed to date, some of the technological challenges that have been overcome in their development, and the research performed to guide the development and to measure the efficacy of these shared environments as training tools.

  2. A Virtual Mission Operations Center: Collaborative Environment

    NASA Technical Reports Server (NTRS)

    Medina, Barbara; Bussman, Marie; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    The intent of the Virtual Mission Operations Center - Collaborative Environment (VMOC-CE) is to provide a central access point for all the resources used in a collaborative mission operations environment, to assist mission operators in communicating on-site and off-site in the investigation and resolution of anomalies. It is a framework that, as a minimum, incorporates online chat, real-time file sharing and remote application sharing components in one central location. The use of a collaborative environment in mission operations opens up the possibility of a central framework through which other project members can access and interact with mission operations staff remotely. The goal of the Virtual Mission Operations Center (VMOC) Project is to identify, develop, and infuse technology to enable mission control by on-call personnel in geographically dispersed locations. In order to achieve this goal, the following capabilities are needed: autonomous mission control systems; automated systems to contact on-call personnel; synthesis and presentation of mission control status and history information; desktop tools for data and situation analysis; a secure mechanism for remote collaborative commanding; and a collaborative environment for remote cooperative work. The VMOC-CE is a collaborative environment that facilitates remote cooperative work. It is an application instance of the Virtual System Design Environment (VSDE), developed by NASA Goddard Space Flight Center's (GSFC) Systems Engineering Services & Advanced Concepts (SESAC) Branch. The VSDE is a web-based portal that includes a knowledge repository and collaborative environment to serve science and engineering teams in product development. It is a "one stop shop" for product design, providing users real-time access to product development data, engineering and management tools, and relevant design specifications and resources through the Internet. The initial focus of the VSDE has been to serve teams working in the early portion of the system

  3. Virtual environments for nuclear power plant design

    SciTech Connect

    Brown-VanHoozer, S.A.; Singleterry, R.C. Jr.; King, R.W.

    1996-03-01

    In the design and operation of nuclear power plants, the visualization process inherent in virtual environments (VE) allows abstract design concepts to be made concrete and simulated without a physical mock-up. This helps reduce the time and effort required to design and understand the system, presenting the design team with a less complicated arrangement. In addition, adverse outcomes of human interaction with components and systems can be minimized by testing various scenarios in real time, without the threat of injury to the user or damage to the equipment. If implemented, this will lead to a minimal total design and construction effort for nuclear power plants (NPP).

  4. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    Our method for presenting the current state of a peat bog focused on the use of a UAV system and, subsequently, Structure-from-Motion algorithms as the processing technique. The peat bog site is located on the Vinderel Plateau, Farcǎu Massif, Maramures Mountains (Romania). The peat bog (1530 m a.s.l., N47°54'11", E24°26'37") lies below the Rugasu ridge (c. 1820 m a.s.l.), and the locality serves as a conservation area for fallen coniferous trees. Peat deposits were formed in a landslide concavity on the western slope of the Farcǎu Massif. Nowadays the site is surrounded by a completely deforested landscape, and the Farcǎu Massif lies above the depressed treeline. The peat bog is in an extraordinary geomorphological situation, because a gully reached the bog and drained its water. In the recent past, sedimentological and dendrochronological research has been initiated; however, an accurate 3D digital surface model was also needed for a complex paleoenvironmental study. Last autumn the bog and its surroundings were finally surveyed by a multirotor UAV developed in-house, based on an open-source flight management unit and its firmware. During this survey a lightweight action camera was used to take the aerial photographs, mainly to decrease payload weight. While our quadcopter is capable of flying automatically along a predefined flight route, several over- and sidelapping flight lines were generated on the ground prior to the actual survey, using control software running on a notebook. Despite these precautions, a limited number of batteries and severe weather affected our final flights, resulting in a reduced surveyed area around the peat bog. Later, during processing, we looked for a reliable tool powerful enough to handle the more than 500 photographs taken during the flights. After testing several software packages, Agisoft PhotoScan was used to create a 3D point cloud and mesh of the bog and its environment. Due to the large number of photographs, PhotoScan had to be configured for network processing to get
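
    The over- and sidelap flight planning mentioned above comes down to simple footprint arithmetic; a sketch with assumed camera and flight parameters, not the values actually used in the survey:

      # Ground footprint and flight-line spacing for a nadir-looking camera.
      # All parameters below are illustrative assumptions, not the survey's values.
      focal_mm = 3.0          # action-camera focal length
      sensor_w_mm = 6.2       # sensor width
      sensor_h_mm = 4.6       # sensor height
      altitude_m = 80.0       # flight height above ground
      forward_overlap = 0.80  # 80% endlap
      side_overlap = 0.60     # 60% sidelap

      footprint_w = altitude_m * sensor_w_mm / focal_mm     # across-track extent (m)
      footprint_h = altitude_m * sensor_h_mm / focal_mm     # along-track extent (m)

      line_spacing = footprint_w * (1.0 - side_overlap)       # distance between flight lines
      photo_spacing = footprint_h * (1.0 - forward_overlap)   # distance between exposures

      print(f"footprint: {footprint_w:.0f} m x {footprint_h:.0f} m")
      print(f"flight-line spacing: {line_spacing:.0f} m, photo base: {photo_spacing:.0f} m")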

  5. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  6. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows the generation, with a very simple protocol, of three-dimensional customized structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  7. 3D Direct Simulation Monte Carlo Modeling of the Spacecraft Environment of Rosetta

    NASA Astrophysics Data System (ADS)

    Bieler, A. M.; Tenishev, V.; Fougere, N.; Gombosi, T. I.; Hansen, K. C.; Combi, M. R.; Huang, Z.; Jia, X.; Toth, G.; Altwegg, K.; Wurz, P.; Jäckel, A.; Le Roy, L.; Gasc, S.; Calmonte, U.; Rubin, M.; Tzou, C. Y.; Hässig, M.; Fuselier, S.; De Keyser, J.; Berthelier, J. J.; Mall, U. A.; Rème, H.; Fiethe, B.; Balsiger, H.

    2014-12-01

    The European Space Agency's Rosetta mission is the first to escort a comet over an extended time as the comet makes its way through the inner solar system. The ROSINA instrument suite, consisting of a double-focusing mass spectrometer, a time-of-flight mass spectrometer and a pressure sensor, will provide temporally and spatially resolved data on the comet's volatile inventory. The effect of spacecraft outgassing is well known and has been measured with the ROSINA instruments onboard Rosetta throughout the cruise phase. The flux of released neutral gas originating from the spacecraft cannot be distinguished from the cometary signal by the mass spectrometers and varies significantly with solar illumination conditions. For accurate interpretation of the instrument data, a good understanding of spacecraft outgassing is necessary. In this talk we present results from simulating the spacecraft environment with the Adaptive Mesh Particle Simulator (AMPS) code. AMPS is a direct simulation Monte Carlo code that includes multiple species on a 3D adaptive mesh to describe a full-scale model of the spacecraft environment. We use the triangulated surface model of the spacecraft to implement realistic outgassing rates for different areas on the surface and take shadowing effects into consideration. The resulting particle fluxes are compared to the measurements of the ROSINA experiment, and implications for ROSINA measurements and data analysis are discussed. Spacecraft outgassing has implications for future space missions to rarefied atmospheres, as it imposes a limit on the detection of various species.

  8. Applications and a three-dimensional desktop environment for an immersive virtual reality system

    NASA Astrophysics Data System (ADS)

    Kageyama, Akira; Masada, Youhei

    2013-08-01

    We developed an application launcher called Multiverse for scientific visualizations in a CAVE-type virtual reality (VR) system. Multiverse can be regarded as a type of three-dimensional (3D) desktop environment. In Multiverse, a user in a CAVE room can browse multiple visualization applications with 3D icons and explore movies that float in the air. Touching one of the movies causes "teleportation" into the application's VR space. After analyzing the simulation data using the application, the user can jump back into Multiverse's VR desktop environment in the CAVE.

  9. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

    Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and appropriate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  10. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, when using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensory incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a Flystick®, we tested the effect of sensory aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensory aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower than in experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of a role of passively perceived optical flow in cybersickness, by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  11. The role of the cytoskeleton in cellular force generation in 2D and 3D environments

    NASA Astrophysics Data System (ADS)

    Kraning-Rush, Casey M.; Carey, Shawn P.; Califano, Joseph P.; Smith, Brooke N.; Reinhart-King, Cynthia A.

    2011-02-01

    To adhere and migrate, cells generate forces through the cytoskeleton that are transmitted to the surrounding matrix. While cellular force generation has been studied on 2D substrates, less is known about cytoskeletal-mediated traction forces of cells embedded in more in vivo-like 3D matrices. Recent studies have revealed important differences between the cytoskeletal structure, adhesion, and migration of cells in 2D and 3D. Because the cytoskeleton mediates force, we sought to directly compare the role of the cytoskeleton in modulating cell force in 2D and 3D. MDA-MB-231 cells were treated with agents that perturbed actin, microtubules, or myosin, and analyzed for changes in cytoskeletal organization and force generation in both 2D and 3D. To quantify traction stresses in 2D, traction force microscopy was used; in 3D, force was assessed based on single cell-mediated collagen fibril reorganization imaged using confocal reflectance microscopy. Interestingly, even though previous studies have observed differences in cell behaviors like migration in 2D and 3D, our data indicate that forces generated on 2D substrates correlate with forces within 3D matrices. Disruption of actin, myosin or microtubules in either 2D or 3D microenvironments disrupts cell-generated force. These data suggest that despite differences in cytoskeletal organization in 2D and 3D, actin, microtubules and myosin contribute to contractility and matrix reorganization similarly in both microenvironments.

  12. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    PubMed

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions. PMID:26283159

  13. Cloudspace: virtual environments in the VO

    NASA Astrophysics Data System (ADS)

    Graham, M. J.; Williams, R. D.

    2008-08-01

    The grid community is moving towards providing on-demand computing in the form of virtual workspaces - abstracted execution environments that are dynamically made available to authorized clients. In part this is a reaction to market forces represented by such commercial initiatives as Amazon EC2 and in part a solution to hot service deployment. One danger, though, is that a multiplicity of implementations will lead to a lack of interoperability. Such a concern in the VO regarding distributed data storage led to the development of VOSpace, a lightweight abstraction layer that sits on top of existing storage solutions such as SRB. In this paper, we introduce Cloudspace, a resource-oriented extension of VOSpace, that incorporates UWS, the VO pattern for managing asynchronous services, to form a natural habitat for virtual environments in the VO. A notable feature of the Cloudspace concept is that distributed data and computing can be managed seamlessly through a single mechanism thus making the astronomer's life easier as we move into a new era of sophisticated computational astronomy.

  14. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. The president and founder of 123 Certification Inc., Mr. Claude Choquet, Ing., M.Sc., IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD interact online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a

  15. Use of a virtual environment to facilitate instruction of an interprofessional home assessment.

    PubMed

    Sabus, Carla; Sabata, Dory; Antonacci, David

    2011-01-01

    Technology has become a ubiquitous part of our society and is largely embedded in today's educational system. 3D virtual reality technology can be used to simulate environments and activities and may be used as an instructional technology. The purpose of this research was to better understand the utility of a web-based virtual environment as a teaching tool to represent clinical assessment and interventions in the home environment. Specifically, students' learning outcomes related to interprofessional collaboration, patient-centered decision-making, and appreciation of the environmental and social context of functional mobility and occupational performance will be described through descriptive analysis. Thirty-four physical therapist students and 35 occupational therapist students participated in an instructor-guided virtual assessment of a client's function in a home environment utilizing a virtual environment, Second Life®. Teams formulated task-specific, functional client goals and home modification recommendations. Students revisited a solution virtual environment to view and evaluate recommendations in a follow-up instructor-guided tour. Students completed a web-based survey capturing student perception of the experience. Team assignments were analyzed based on a rubric representing learning objectives. Descriptive analysis was conducted on the survey. Assignment analysis revealed contextual and client-centered recommendations. Student surveys revealed that students found the virtual environment supportive of learning. Student surveys and reflection statements were supportive of the interprofessional collaboration. Use of a virtual environment in instruction allows an authentic means of representing interprofessional home assessment. The virtual environment allowed a temporal depiction of home environment issues and solutions providing the unique opportunity for students to evaluate home recommendations. PMID:22138875

  16. Neurite outgrowth at the interface of 2D and 3D growth environments

    NASA Astrophysics Data System (ADS)

    Kofron, Celinda M.; Fong, Vivian J.; Hoffman-Kim, Diane

    2009-02-01

    Growing neurons navigate complex environments, but in vitro systems for studying neuronal growth typically limit the cues to flat surfaces or a single type of cue, thereby limiting the resulting growth. Here we examined the growth of neurons presented with two-dimensional (2D) substrate-bound cues when these cues were presented in conjunction with a more complex three-dimensional (3D) architecture. Dorsal root ganglia (DRG) explants were cultured at the interface between a collagen I matrix and a glass coverslip. Laminin (LN) or chondroitin sulfate proteoglycans (CSPG) were uniformly coated on the surface of the glass coverslip or patterned in 50 µm tracks by microcontact printing. Quantitative analysis of neurite outgrowth with a novel grid system at multiple depths in the gel revealed several interesting trends. Most of the neurites extended at the surface of the gel when LN was presented whereas more neurites extended into the gel when CSPG was presented. Patterning of cues did not affect neurite density or depth of growth. However, neurite outgrowth near the surface of the gel aligned with LN patterns, and these extensions were significantly longer than neurites extended in other cultures. In interface cultures, DRG growth patterns varied with the type of cue where neurite density was higher in cultures presenting LN than in cultures presenting CSPG. These results represent an important step toward understanding how neurons integrate local structural and chemical cues to make net growth decisions.

  17. Teaching undergraduate nursing students renal care in a 3D Gaming Environment.

    PubMed

    Foster, Joanne; Dallemagne, Catherine

    2009-01-01

    The original program on renal care was developed between 1995 and 1997 using 'Toolbook' software, which presented the content in a non-interactive graphical way, without tracking student progress or recording results, and was available to students via a CD-ROM. The content described the clinical decision-making process that practitioners follow when diagnosing and managing renal diseases. These processes followed a learning sequence whereby a series of decisions leads to the next phase of diagnosis and treatment. The purpose was to simulate the live clinical decision-making process for practitioners. An additional built-in 'Ask the Expert' button (help function) guided students in correct clinical decision making. Problems encountered with the original program were that the navigation was not intuitive to the user, that students could easily get lost while going through the step-by-step introduction, and that it lacked interactivity. The original program still has relevant learning content, but the software, illustrations and tracking of learning outcomes are out of date. Therefore a redesign of the original program using a 3D gaming environment with updated content is being undertaken. This paper will discuss the methodology underpinning the new development, give a demonstration of the program, and report the results from student feedback, which will be collected in February-March 2009. PMID:19593009

  18. Nuclear deformability constitutes a rate-limiting step during cell migration in 3-D environments

    PubMed Central

    Davidson, Patricia M.; Denais, Celine; Bakshi, Maya C.; Lammerding, Jan

    2014-01-01

    Cell motility plays a critical role in many physiological and pathological settings, ranging from wound healing to cancer metastasis. While cell migration on 2-dimensional (2-D) substrates has been studied for decades, the physical challenges cells face when moving in 3-D environments are only now emerging. In particular, the cell nucleus, which occupies a large fraction of the cell volume and is normally substantially stiffer than the surrounding cytoplasm, may impose a major obstacle when cells encounter narrow constrictions in the interstitial space, the extracellular matrix, or small capillaries. Using novel microfluidic devices that allow observation of cells moving through precisely defined geometries at high spatial and temporal resolution, we determined nuclear deformability as a critical factor in the cells’ ability to pass through constrictions smaller than the size of the nucleus. Furthermore, we found that cells with reduced levels of the nuclear envelope proteins lamins A/C, which are the main determinants of nuclear stiffness, passed significantly faster through narrow constrictions during active migration and passive perfusion. Given recent reports that many human cancers have altered lamin expression, our findings suggest a novel biophysical mechanism by which changes in nuclear structure and composition may promote cancer cell invasion and metastasis. PMID:25436017

  19. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.
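
    A schematic sketch of the integration strategy described above: the SLAM trajectory re-seeds the model-based tracker whenever its confidence drops. All class and method names here are hypothetical placeholders, not the actual ViSP or RGB-D SLAM APIs:

      import numpy as np

      class ModelBasedTracker:
          """Placeholder for a ViSP-style model-based tracker (hypothetical API)."""
          def __init__(self): self.T = np.eye(4)
          def set_pose(self, T_cam_model): self.T = T_cam_model
          def track(self, frame):
              # Returns (4x4 pose estimate, scalar confidence); stubbed here.
              return self.T, 1.0

      class RgbdSlam:
          """Placeholder for an RGB-D SLAM front end (hypothetical API)."""
          def update(self, rgb, depth):
              return np.eye(4)  # camera pose in the SLAM map frame

      def fused_pose_stream(frames, tracker, slam, T_map_model, min_conf=0.5):
          """Yield one camera-to-model pose per frame, re-seeding the tracker
          from the SLAM trajectory whenever the model-based estimate is unreliable."""
          for rgb, depth in frames:
              T_map_cam = slam.update(rgb, depth)
              pose, conf = tracker.track(rgb)
              if conf < min_conf:
                  # T_cam_model = inv(T_map_cam) @ T_map_model
                  pose = np.linalg.inv(T_map_cam) @ T_map_model
                  tracker.set_pose(pose)
              yield pose

      demo_frames = [(None, None)] * 3
      poses = list(fused_pose_stream(demo_frames, ModelBasedTracker(), RgbdSlam(), np.eye(4)))
      print(len(poses), "poses estimated")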

  20. Virtual Learning Environment for Astronomy Education

    NASA Astrophysics Data System (ADS)

    Hoban, S.; Kumar, S.

    2004-12-01

    We have developed a virtual learning environment for astronomy education, which we call VTIE (for Virtual Telescopes in Education). While astronomy often inspires "oohs" and "ahhs" with glorious imagery, the VTIE architecture emphasizes the scientific process, eliciting questions about the nature of celestial objects and the physical processes which give rise to the pretty pictures. VTIE aims to bring observational astronomy directly to learners in both formal and informal settings by providing tools for both educators and students. For educators, VTIE provides the capability to design astronomy experiments, an online review tool to comment upon students' proposals and papers, and classroom management tools (e.g., a messaging service and the ability to create a reading list). For students, VTIE provides an interface for developing an observing proposal (details of which are designed by the educators), access to online data services, an online observing log, and a Paper Writing Tool to complete the process by reporting their results. Details of the system and practical examples will be provided.

  1. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn and communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  2. Exploring the Potential of Aerial Photogrammetry for 3d Modelling of High-Alpine Environments

    NASA Astrophysics Data System (ADS)

    Legat, K.; Moe, K.; Poli, D.; Bollmann, E.

    2016-03-01

    cameras of Microsoft's UltraCam series and the in-house processing chain centred on the Dense-Image-Matching (DIM) software SURE by nFrames. This paper reports the work carried out at AVT for the surface and terrain modelling of several high-alpine areas using DIM- and ALS-based approaches. A special focus is dedicated to the influence of terrain morphology, flight planning, GNSS/IMU measurements, and ground-control distribution in the georeferencing process on the data quality. Based on the very promising results, some general recommendations for aerial photogrammetric processing in high-alpine areas are made to achieve the best possible accuracy of the final 3D, 2.5D and 2D products.

  3. Electric Circuits in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Meisner, Gerald W.; Hoffman, H.; Turner, M.

    2006-12-01

    Online tutorials present opportunities that are difficult to replicate elsewhere. A virtual environment permits carefully scripted material, driven by exemplary pedagogy; tutoring by branching at check points, directed by PER-delineated misconceptions and laboratory experience; the ability of each user to make mistakes and to engage in all aspects of learning (making errors, collecting data, graphing, analyzing, drawing conclusions); the ability to deploy carefully constructed physics models in visually rich and unusual settings outside the ‘laboratory’; the ability of students to record not only their data, but also their ideas, doubts, questions and conclusions in an easily searchable data format; and the ability of faculty to respond to student questions and conclusions in a manner timely to both parties. We will show how ‘Electric Circuits’, one of the tutorials in LAB-Physics, satisfies the above conditions. Student responses will be given.

  4. Olfactory Stimuli Increase Presence in Virtual Environments

    PubMed Central

    Munyan, Benson G.; Neer, Sandra M.; Beidel, Deborah C.; Jentsch, Florian

    2016-01-01

    Background: Exposure therapy (EXP) is the most empirically supported treatment for anxiety and trauma-related disorders. EXP consists of repeated exposure to a feared object or situation in the absence of the feared outcome in order to extinguish the associated anxiety. Key to the success of EXP is the need to present the feared object/event/situation in as much detail and utilizing as many sensory modalities as possible, in order to augment the sense of presence during exposure sessions. Various technologies have been used to augment the exposure therapy process by presenting multi-sensory cues (e.g., sights, smells, sounds). Studies have shown that scents can elicit emotionally charged memories, but no prior research has examined the effect of olfactory stimuli upon the patient's sense of presence during simulated exposure tasks. Methods: 60 adult participants navigated a mildly anxiety-producing virtual environment (VE) similar to those used in the treatment of anxiety disorders. Participants had no autobiographical memory associated with the VE. State anxiety, presence ratings, and electrodermal activity (EDA) were collected throughout the experiment. Results: Utilizing a Bonferroni-corrected linear mixed model, our results showed statistically significant relationships between olfactory stimuli and presence as assessed by both the Igroup Presence Questionnaire (IPQ; R2 = 0.85, F(3,52) = 6.625, p = 0.0007) and a single-item visual-analogue scale (R2 = 0.85, F(3,52) = 5.382, p = 0.0027). State anxiety was unaffected by the presence or absence of olfactory cues. EDA was unaffected by experimental condition. Conclusion: Olfactory stimuli increase presence in virtual environments that approximate those typical in exposure therapy, but did not increase EDA. Additionally, once administered, the removal of scents resulted in a disproportionate decrease in presence. Implications for incorporating the use of scents to increase the efficacy of exposure therapy are discussed. PMID
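
    The analysis reported above is a linear mixed model with repeated presence ratings per participant; a minimal sketch using fabricated illustrative data (not the study's dataset), assuming the statsmodels formula interface:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subjects, n_trials = 60, 4
      data = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subjects), n_trials),
          "scent": np.tile([0, 1, 1, 0], n_subjects),   # olfactory cue off/on
      })
      # Fabricated presence ratings with a modest effect of the scent condition.
      data["presence"] = 3.0 + 0.8 * data["scent"] + rng.normal(0, 0.5, len(data))

      # Random intercept per participant; fixed effect of the olfactory condition.
      model = smf.mixedlm("presence ~ scent", data, groups=data["subject"])
      print(model.fit().summary())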

  5. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach to pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads-Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  6. Web-Based 3D and Haptic Interactive Environments for e-Learning, Simulation, and Training

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix G.; Sopin, Ivan

    Knowledge creation occurs in the process of social interaction. As our service-based society is evolving into a knowledge-based society, there is an acute need for more effective collaboration and knowledge-sharing systems to be used by geographically scattered people. We present the use of 3D components and standards, such as Web3D, in combination with the haptic paradigm, for e-Learning and simulation.

  7. MGLab3D: An interactive environment for iterative solvers for elliptic PDEs in two and three dimensions

    SciTech Connect

    Bordner, J.; Saied, F.

    1996-12-31

    MGLab3D is an enhancement of an interactive environment (MGLab) for experimenting with iterative solvers and multigrid algorithms. It is implemented in MATLAB. The new version has built-in 3D elliptic PDEs and several iterative methods and preconditioners that were not available in the original version. A sparse direct solver option has also been included. The multigrid solvers have also been extended to 3D. The discretization and PDE domains are restricted to standard finite differences on the unit square/cube. The power of this software lies in the fact that no programming is needed to solve, for example, the convection-diffusion equation in 3D with TFQMR and a customized V-cycle preconditioner, for a variety of problem sizes and mesh Reynolds numbers. In addition to the graphical user interface, some sample drivers are included to show how experiments can be composed using the underlying suite of problems and solvers.
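
    As a rough stand-alone analogue of the kind of experiment MGLab3D automates, the sketch below assembles a 3D finite-difference Poisson problem on the unit cube and solves it with a preconditioned Krylov method in SciPy. This is an independent illustration, not MGLab3D/MATLAB code; GMRES with a Jacobi preconditioner stands in for the TFQMR/multigrid combinations mentioned above:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 16                                   # interior grid points per dimension
      h = 1.0 / (n + 1)
      I = sp.identity(n, format="csr")
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

      # 3D Laplacian on the unit cube: (kron(T,I,I) + kron(I,T,I) + kron(I,I,T)) / h^2
      A = (sp.kron(sp.kron(T, I), I) + sp.kron(sp.kron(I, T), I)
           + sp.kron(sp.kron(I, I), T)) / h**2
      b = np.ones(n**3)

      diag = A.diagonal()
      M = spla.LinearOperator(A.shape, matvec=lambda x: x / diag)  # Jacobi preconditioner
      x, info = spla.gmres(A, b, M=M)
      print("converged" if info == 0 else f"info = {info}",
            "| residual norm:", np.linalg.norm(b - A @ x))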

  8. Effects of Na+ and He+ pickup ions on the lunar plasma environment: 3D hybrid modeling

    NASA Astrophysics Data System (ADS)

    Lipatov, A. S.; Cooper, J. F.; Sittler, E. C.; Hartle, R. E.; Sarantos, M.

    2011-12-01

    The hybrid kinetic model used here supports comprehensive simulation of the interaction between different spatial and energetic elements of the Moon-solar wind-magnetosphere system of the Earth. There is a set of MHD, kinetic, hybrid, drift-kinetic, electrostatic and full kinetic models of the lunar plasma environment [1]. However, observations show the existence of several species of neutrals and pickup ions such as Na, He, K, O, etc. (see e.g., [2,3,4]). The solar wind parameters are chosen from the ARTEMIS observations [5]. The parameters of the Na+ and He+ lunar exospheres are chosen from [6,7]. The hybrid kinetic model allows us to take into account the finite gyroradius effects of pickup ions and to correctly estimate the ion velocity distribution and the fluxes along the magnetic field and onto the lunar surface. Modeling shows the formation of an asymmetric Mach cone and the structuring of the pickup ion tails, and presents another type of lunar-solar wind interaction. We will compare the results of our modeling with observed distributions. References [1] Lipatov, A.S., and Cooper, J.F., Hybrid kinetic modeling of the Lunar plasma environment: Past, present and future. In: Lunar Dust, Plasma and Atmosphere: The Next Steps, January 27-29, 2010, Boulder, Colorado, Abstracts/lpa2010.colorado.edu/. [2] Potter, A.E., and Morgan, T.H., Discovery of sodium and potassium vapor in the atmosphere of the Moon, Science, 241, 675-680, doi:10.1126/science.241.4866.675, 1988. [3] Tyler, A.L., et al., Observations of sodium in the tenuous lunar atmosphere, Geophys. Res. Lett., 15(10), 1141-1144, doi:10.1029/GL015i010p01141, 1988. [4] Tanaka, T., et al., First in situ observation of the Moon-originating ions in the Earth's Magnetosphere by MAP-PACE on SELENE (KAGUYA), Geophys. Res. Lett., 36, L22106, doi:10.1029/2009GL040682, 2009. [5] Wiehle, S., et al., First Lunar Wake Passage of ARTEMIS: Discrimination of Wake Effects and Solar Wind Fluctuations by 3D Hybrid Simulations, Planet

  9. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and to interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's movement through the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some remaining software and complexity issues, which are discussed in the paper.
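
    The core operation behind such an overlay is projecting photo-model vertices into the display using the tracked camera pose; a minimal pinhole-camera sketch with assumed intrinsics, not the system's actual calibration:

      import numpy as np

      # Assumed intrinsics of the HMD camera (illustrative values only).
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

      def project(points_world, R, t):
          """Project Nx3 world points into pixel coordinates given the tracked
          rotation R (3x3) and translation t (3,) of the camera."""
          cam = (R @ points_world.T).T + t      # world frame -> camera frame
          uvw = (K @ cam.T).T                   # camera frame -> image plane
          return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

      # One corner of a hypothetical building photo-model, 10 m in front of the user.
      vertices = np.array([[0.0, 0.0, 10.0], [2.0, 0.0, 10.0], [2.0, 3.0, 10.0]])
      print(project(vertices, np.eye(3), np.zeros(3)))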

  10. Increasing 3D Matrix Rigidity Strengthens Proliferation and Spheroid Development of Human Liver Cells in a Constant Growth Factor Environment.

    PubMed

    Bomo, Jérémy; Ezan, Frédéric; Tiaho, François; Bellamri, Medjda; Langouët, Sophie; Theret, Nathalie; Baffet, Georges

    2016-03-01

    Mechanical forces influence the growth and shape of virtually all tissues and organs. Recent studies show that increased cell contractility, growth and differentiation might be normalized by modulating cell tensions. In particular, the role of these tensions applied by the extracellular matrix during liver fibrosis could influence the hepatocarcinogenesis process. The objective of this study is to determine whether 3D stiffness could influence growth and phenotype of normal and transformed hepatocytes and to relate extracellular matrix (ECM) stiffness to tensional homeostasis. We have developed an appropriate 3D culture model: hepatic cells within three-dimensional collagen matrices of varying rigidity. Our results demonstrate that the rigidity influenced the cell phenotype and induced spheroid cluster development, whereas in soft matrices, Huh7 transformed cells were less proliferative, well-spread and flattened. We confirmed that ERK1 played a predominant role over ERK2 in cisplatin-induced death, whereas ERK2 mainly controlled proliferation. As compared to 2D culture, 3D cultures are associated with epithelial marker expression. Interestingly, proliferation of normal hepatocytes was also induced in rigid gels. Furthermore, biotransformation activities are increased in 3D gels, where the CYP1A2 enzyme can be highly induced/activated in primary cultures of human hepatocytes embedded in the matrix. In conclusion, we demonstrated that increasing 3D rigidity could promote proliferation and spheroid development of liver cells, demonstrating that 3D collagen gels are an attractive tool for studying rigidity-dependent homeostasis of liver cells embedded in the matrix and should be preferred for both chronic toxicological and pharmacological drug screening. PMID:26331987

  11. A Second Chance at Health: How a 3D Virtual World Can Improve Health Self-Efficacy for Weight Loss Management Among Adults.

    PubMed

    Behm-Morawitz, Elizabeth; Lewallen, Jennifer; Choi, Grace

    2016-02-01

    Health self-efficacy, or the beliefs in one's capabilities to perform health behaviors, is a significant factor in eliciting health behavior change, such as weight loss. Research has demonstrated that virtual embodiment has the potential to alter one's psychology and physicality, particularly in health contexts; however, little is known about the impacts embodiment in a virtual world has on health self-efficacy. The present research is a randomized controlled trial (N = 90) examining the effectiveness of virtual embodiment and play in a social virtual world (Second Life [SL]) for increasing health self-efficacy (exercise and nutrition efficacy) among overweight adults. Participants were randomly assigned to a 3D social virtual world (avatar virtual interaction experimental condition), 2D social networking site (no avatar virtual interaction control condition), or no intervention (no virtual interaction control condition). The findings of this study provide initial evidence for the use of SL to improve exercise efficacy and to support weight loss. Results also suggest that individuals who have higher self-presence with their avatar reap more benefits. Finally, quantitative findings are triangulated with qualitative data to increase confidence in the results and provide richer insight into the perceived effectiveness and limitations of SL for meeting weight loss goals. Themes resulting from the qualitative analysis indicate that participation in SL can improve motivation and efficacy to try new physical activities; however, individuals who have a dislike for video games may not be benefitted by avatar-based virtual interventions. Implications for research on the transformative potential of virtual embodiment and self-presence in general are discussed. PMID:26882324

  12. Development of Virtual Geographic Environments and Geography Research

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin

    Geographic environment is a combination of natural and cultural environments under which humans survive. Virtual Geographic Environment (VGE) is a new multi-disciplinary initiative that links geosciences, geographic information sciences and information technologies. A VGE is a virtual representation of the natural world that enables a person to explore and interact with vast amounts of natural and cultural information on the physical and cultural environment in cyberspace. Virtual Geography and Experimental Geography are the two closest fields that associate with the development of VGE from the perspective of geography. This paper discusses the background of VGE, introduces its research progress, and addresses key issues of VGE research and the significance for geography research from Experimental Geography and Virtual Geography. VGE can be an extended research object for the research of Virtual Geography and enrich the contents of future geography, while VGE can also be an extended research method for Experimental Geography that geographers can operate virtual geographic experiments based on VGE platforms.

  13. Evaluation of pointing techniques for ray casting selection in virtual environments

    NASA Astrophysics Data System (ADS)

    Lee, SangYoon; Seo, Jinseok; Kim, Gerard J.; Park, Chan-Mo

    2003-04-01

    Various techniques for object selection in virtual environments have been proposed over the years. Among them, the virtual pointer, or ray-casting, is one of the most popular methods for object selection, because it is easy and intuitive to use and allows the user to select objects that are far away. Variants of the virtual pointer metaphor include the Aperture, Flashlight, and Image-plane methods. In a monoscopic environment, these methods are essentially 2D interaction techniques, as the selection is effectively made on the image plane. Such a 2D-based selection (or, more generally, interaction) method has an added advantage in that it can find many good uses in 3D environments, ranging from a simple 2D-oriented subtask (object selection on a constrained surface, menu selection) to a situation where a whole 2D application (e.g., a sketching tool or desktop manager) is embedded in the 3D environment. In this paper, we experimentally compare the performance of four different virtual pointer implementations, namely direct image-plane selection, a head-directed pointer, a hand-directed pointer and a head-hand-directed pointer. The experimental results revealed that direct image-plane selection produced the best performance among the four in terms of both task completion time and pixel-level pointing error.
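
    All four techniques ultimately reduce to casting a ray and picking the nearest intersected object; a minimal sketch using bounding-sphere tests and hypothetical scene data (not the study's implementation):

      import numpy as np

      def ray_sphere_hit(origin, direction, center, radius):
          """Return the distance along the ray to a sphere, or None if it misses."""
          oc = origin - center
          b = np.dot(oc, direction)
          c = np.dot(oc, oc) - radius ** 2
          disc = b * b - c
          if disc < 0.0:
              return None
          t = -b - np.sqrt(disc)
          return t if t > 0.0 else None

      def pick(origin, direction, objects):
          """Select the nearest object hit by the pointer ray (direction must be unit length)."""
          hits = [(ray_sphere_hit(origin, direction, center, radius), name)
                  for name, center, radius in objects]
          hits = [(t, name) for t, name in hits if t is not None]
          return min(hits)[1] if hits else None

      scene = [("lamp", np.array([0.0, 0.0, 5.0]), 0.5),
               ("chair", np.array([1.0, 0.0, 8.0]), 1.0)]
      print(pick(np.zeros(3), np.array([0.0, 0.0, 1.0]), scene))  # -> "lamp"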

  14. Cell force measurements in 3D microfabricated environments based on compliant cantilevers.

    PubMed

    Marelli, Mattia; Gadhari, Neha; Boero, Giovanni; Chiquet, Matthias; Brugger, Jürgen

    2014-01-21

    We report the fabrication, functionalization and testing of microdevices for cell culture and cell traction force measurements in three dimensions (3D). The devices are composed of bent cantilevers patterned with cell-adhesive spots that do not lie on the same plane, thus suspending cells in 3D. The cantilevers are soft enough to undergo micrometric deflections when cells pull on them, allowing cell forces to be measured by optical microscopy. Since individual cantilevers are mechanically independent of each other, cell traction forces are determined directly from cantilever deflections. This demonstrates the potential of these new devices as a tool for the quantification of cell mechanics in a system with well-defined 3D geometry and mechanical properties. PMID:24217771
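
    As a rough illustration of how a deflection maps to a force, the sketch below applies standard Euler-Bernoulli beam theory to an end-loaded rectangular cantilever; all dimensions and material properties are assumed placeholder values, not the authors' device parameters.

```python
# Force from cantilever tip deflection: F = k * delta, with k = 3*E*I / L^3
# and I = w*t^3 / 12 for a rectangular cross-section (standard beam theory).
E = 4.0e9        # Young's modulus, Pa (SU-8-like polymer; assumed)
L = 200e-6       # cantilever length, m (assumed)
w = 30e-6        # cantilever width, m (assumed)
t = 2e-6         # cantilever thickness, m (assumed)

I = w * t**3 / 12.0        # second moment of area, m^4
k = 3.0 * E * I / L**3     # bending stiffness, N/m

delta = 1.5e-6             # measured tip deflection, m (e.g. from optical microscopy)
force = k * delta          # inferred cell traction force, N
print(f"stiffness = {k:.3e} N/m, force = {force * 1e9:.1f} nN")
```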

  15. Emerging Technologies in the Built Environment: Geographic Information Science (GIS), 3D Printing, and Additive Manufacturing

    SciTech Connect

    New, Joshua Ryan

    2014-01-01

    Abstract 1: Geographic information systems emerged as a computer application in the late 1960s, led in part by projects at ORNL. The concept of a GIS has shifted through time in response to new applications and new technologies, and is now part of a much larger world of geospatial technology. This presentation discusses the relationship of GIS and estimating hourly and seasonal energy consumption profiles in the building sector at spatial scales down to the individual parcel. The method combines annual building energy simulations for city-specific prototypical buildings and commonly available geospatial data in a GIS framework. Abstract 2: This presentation focuses on 3D printing technologies and how they have rapidly evolved over the past couple of years. At a basic level, 3D printing produces physical models quickly and easily from 3D CAD, BIM (Building Information Models), and other digital data. Many AEC firms have adopted 3D printing as part of commercial building design development and project delivery. This presentation includes an overview of 3D printing, discusses its current use in building design, and talks about its future in relation to the HVAC industry. Abstract 3: This presentation discusses additive manufacturing and how it is revolutionizing the design of commercial and residential facilities. Additive manufacturing utilizes a broad range of direct manufacturing technologies, including electron beam melting, ultrasonic, extrusion, and laser metal deposition for rapid prototyping. While there is some overlap with the 3D printing talk, this presentation focuses on the materials aspect of additive manufacturing and also some of the more advanced technologies involved with rapid prototyping. These technologies include design of carbon fiber composites, lightweight metals processing, transient field processing, and more.

  16. Design Characteristics of Virtual Learning Environments: State of Research

    ERIC Educational Resources Information Center

    Mueller, Daniel; Strohmeier, Stefan

    2011-01-01

    Virtual learning environments constitute current information systems' category for electronically supported training and development in (higher) education(al) and vocational training settings. Frequently expected advantages of using virtual learning environments refer, for instance, to the efficiency, individuality, ubiquity, timeliness and…

  17. Virtual Learning Environment for Interactive Engagement with Advanced Quantum Mechanics

    ERIC Educational Resources Information Center

    Pedersen, Mads Kock; Skyum, Birk; Heck, Robert; Müller, Romain; Bason, Mark; Lieberoth, Andreas; Sherson, Jacob F.

    2016-01-01

    A virtual learning environment can engage university students in the learning process in ways that the traditional lectures and lab formats cannot. We present our virtual learning environment "StudentResearcher," which incorporates simulations, multiple-choice quizzes, video lectures, and gamification into a learning path for quantum…

  18. The Doubtful Guest? A Virtual Research Environment for Education

    ERIC Educational Resources Information Center

    Laterza, Vito; Carmichael, Patrick; Procter, Richard

    2007-01-01

    In this paper the authors describe a novel "Virtual Research Environment" (VRE) based on the Sakai Virtual Collaboration Environment and designed to support education research. This VRE has been used for the past two years by projects of the UK Economic and Social Research Council's Teaching and Learning Research Programme, 10 of which were…

  19. Virtual Environments Supporting Learning and Communication in Special Needs Education

    ERIC Educational Resources Information Center

    Cobb, Sue V. G.

    2007-01-01

    Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…

  20. Temporal Issues in the Design of Virtual Learning Environments.

    ERIC Educational Resources Information Center

    Bergeron, Bryan; Obeid, Jihad

    1995-01-01

    Describes design methods used to influence user perception of time in virtual learning environments. Examines the use of temporal cues in medical education and clinical competence testing. Finds that user perceptions of time affect user acceptance, ease of use, and the level of realism of a virtual learning environment. Contains 51 references.…

  1. A 3-D Propagation Model for Emerging Land Mobile Radio Cellular Environments

    PubMed Central

    Ahmed, Abrar; Nawaz, Syed Junaid; Gulfam, Sardar Muhammad

    2015-01-01

    A tunable stochastic geometry based Three-Dimensional (3-D) scattering model for emerging land mobile radio cellular systems is proposed. Uniformly distributed scattering objects are assumed around the Mobile Station (MS), bounded within an ellipsoid-shaped Scattering Region (SR) hollowed by an elliptic-cylindric scatter-free region in the immediate vicinity of the MS. To ensure the desired degree of accuracy, the proposed model is designed to be tunable (as required) with nine degrees of freedom, unlike its counterparts in the existing literature. The outer and inner boundaries of the SR are designed to be independently scalable along all the axes and rotatable in the horizontal plane around their origin centered at the MS. The elevated Base Station (BS) is considered outside the SR at an adjustable distance and height with respect to the position of the MS. Closed-form analytical expressions for the joint and marginal Probability Density Functions (PDFs) of Angle-of-Arrival (AoA) and Time-of-Arrival (ToA) are derived for both up- and down-links. The obtained analytical results for the angular and temporal statistics of the channel are presented along with a thorough analysis. The impact of various physical model parameters on the angular and temporal characteristics of the channel is presented, providing comprehensive insight into the proposed results. To evaluate the robustness of the proposed analytical model, a comparison with experimental datasets and simulation results is also presented. The obtained analytical results for the PDF of AoA observed at the BS are seen to fit a wide range of empirical datasets in the literature taken for various outdoor propagation environments. In order to establish the validity of the obtained analytical results for the spatial and temporal characteristics of the channel, a comparison of the proposed analytical results with the simulation results is shown, which illustrates a good fit for 107 scattering points. Moreover, the proposed model is shown to degenerate to
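
    The following Monte-Carlo sketch mirrors the model geometry described above (uniform scatterers inside an ellipsoid around the MS with a scatter-free hole, single-bounce paths to an elevated BS) so that empirical AoA/ToA statistics can be collected; it is an illustration under assumed dimensions, not the paper's closed-form derivation.

```python
# Single-bounce scattering geometry: draw uniform scatterers in a hollowed
# ellipsoid around the MS, then compute the azimuth AoA at the BS and the ToA
# of each BS-scatterer-MS path.
import numpy as np

c = 3e8                                    # speed of light, m/s
a, b, h = 200.0, 120.0, 40.0               # outer ellipsoid semi-axes, m (assumed)
a_in, b_in = 30.0, 20.0                    # inner scatter-free semi-axes, m (assumed)
bs = np.array([-800.0, 0.0, 50.0])         # BS position relative to the MS, m (assumed)

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200_000, 3)) * np.array([a, b, h])
inside_outer = (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 + (pts[:, 2] / h) ** 2 <= 1.0
outside_hole = (pts[:, 0] / a_in) ** 2 + (pts[:, 1] / b_in) ** 2 > 1.0
scat = pts[inside_outer & outside_hole]    # accepted scattering points (uniform in SR)

# Azimuth angle-of-arrival seen at the BS and time-of-arrival of each path.
from_bs = scat - bs
aoa_bs = np.degrees(np.arctan2(from_bs[:, 1], from_bs[:, 0]))
toa = (np.linalg.norm(scat, axis=1) + np.linalg.norm(from_bs, axis=1)) / c

print(f"{scat.shape[0]} scatterers kept, "
      f"mean ToA = {toa.mean() * 1e6:.2f} us, AoA std = {aoa_bs.std():.2f} deg")
```

    Histogramming aoa_bs and toa from such a simulation is the kind of comparison the paper uses to validate its analytical PDFs.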

  2. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  3. Virtual Virtuosos: A Case Study in Learning Music in Virtual Learning Environments in Spain

    ERIC Educational Resources Information Center

    Alberich-Artal, Enric; Sangra, Albert

    2012-01-01

    In recent years, the development of Information and Communication Technologies (ICT) has contributed to the generation of a number of interesting initiatives in the field of music education and training in virtual learning environments. However, music education initiatives employing virtual learning environments have replicated and perpetuated the…

  4. Faculty Perceptions of Instruction in Collaborative Virtual Immersive Learning Environments in Higher Education

    ERIC Educational Resources Information Center

    Janson, Barbara

    2013-01-01

    Use of 3D (three-dimensional) avatars in a synchronous virtual world for educational purposes has only been adopted for about a decade. Universities are offering synchronous, avatar-based virtual courses for credit - within 3D worlds (Luo & Kemp, 2008). Faculty and students immerse themselves, via avatars, in virtual worlds and communicate…

  5. EXTRACTING A RADAR REFLECTION FROM A CLUTTERED ENVIRONMENT USING 3-D INTERPRETATION

    EPA Science Inventory

    A 3-D Ground Penetrating Radar (GPR) survey at 50 MHz center frequency was conducted at Hill Air Force Base, Utah, to define the topography of the base of a shallow aquifer. The site for the survey was Chemical Disposal Pit #2 where there are many man-made features that generate ...

  6. Analysis for Clinical Effect of Virtual Windowing and Poking Reduction Treatment for Schatzker III Tibial Plateau Fracture Based on 3D CT Data

    PubMed Central

    Zhang, Huafeng; Li, Zhijun; Xu, Qian; Zhang, Yuan; Xu, Ke; Ma, Xinlong

    2015-01-01

    Objective. To explore the applications of preoperative planning and virtual surgery, including surgical windowing and elevating reduction, and to determine the clinical effects of this technology on the treatment of Schatzker type III tibial plateau fractures. Methods. 32 patients with Schatzker type III tibial plateau fractures were randomised upon their admission to the hospital using a sealed-envelope method. Fourteen were treated with preoperative virtual design and assisted operation (virtual group) and 18 with direct open reduction and internal fixation (control group). Results. All patients achieved primary incision healing. Compared with the control group, the virtual group showed significant advantages in operative time, incision length, and blood loss (P < 0.001). The virtual surgery was consistent with the actual surgery. Conclusion. The virtual group fared better than the control group in the treatment of Schatzker type III tibial plateau fractures, owing to shorter operative time, smaller incision length, and lower blood loss. The reconstructed 3D fracture model could be used to preoperatively determine the surgical windowing and elevating reduction method and to simulate the operation for Schatzker type III tibial plateau fractures. PMID:25767804

  7. New insights into the earliest Quaternary environments in the Central North Sea from 3D seismic

    NASA Astrophysics Data System (ADS)

    Lamb, Rachel; Huuse, Mads; Stewart, Margaret; Brocklehurst, Simon H.

    2014-05-01

    In the past, the transition from an unconformable surface in the south to a conformable horizon towards the north has made identification and mapping of the base-Quaternary in the central North Sea difficult (Sejrup et al 1991; Gatliff et al 1994). However, recent integration of biostratigraphy, pollen analysis, paleomagnetism and amino acid analysis in the Dutch and Danish sectors (Rasmussen et al 2005; Kuhlmann et al 2006) has allowed greater confidence in the correlation to the regional 3D seismic datasets and has thus allowed the base-Quaternary to be mapped across the entire basin. The base-Quaternary has been mapped using the PGS MegaSurvey dataset from wells in the Danish Sector along the initially unconformable horizon and down the delta front into the more conformable basin, giving a high degree of confidence in the horizon pick. The revised base-Quaternary surface reaches a depth of 1248 ms TWT, with an elongate basin shape, and is significantly deeper than the traditionally mapped surface. Using RMS amplitudes and other seismic attributes, the revised base-Quaternary has been investigated along the horizon and in time slices to interpret the environments of the earliest Quaternary prior to the onset of glaciation. Combined with the analysis of aligned elongate furrows over 10 km long, 100 m wide and 100 m deep, these observations suggest a deep marine environment in an almost enclosed basin with persistent strong NW-SE bottom currents in the deepest parts. Pockmarks were formed by the escape of shallow gas on the sides of a small delta in the eastern part of the basin. The progradation of large deltas from both the north and south into the basin makes up the majority of the sediment deposited in the basin. Key Words: base-Quaternary; seismic interpretation; paleoenvironments References: Gatliff, R.W, Richards, P.C, Smith, K, Graham, C.C, McCormac, M, Smith, N.J.P, Long, D, Cameron, T.D.J, Evans, D, Stevenson, A.G, Bulat, J, Ritchie, J.D, (1994) 'United Kingdom offshore regional

  8. Testing the hybrid-3-D hillslope hydrological model in a controlled environment

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P.; Gochis, D.; Niu, G.-Y.; Pangle, L. A.; Pelletier, J. D.; Troch, P. A.; Zeng, X.

    2016-02-01

    Hillslopes are important for converting rainfall into runoff, influencing the terrestrial dynamics of the Earth's climate system. Recently, we developed a hybrid-3-D (h3D) hillslope hydrological model that gives results similar to those of a full 3-D hydrological model but is up to two to three orders of magnitude faster computationally. Here h3D is assessed using a number of recharge-drainage experiments within the Landscape Evolution Observatory (LEO) with accurate and high-resolution (both temporally and spatially) observations of the inputs, outputs, and storage dynamics of several hillslopes. Such detailed measurements are generally not available for real-world hillslopes. Results show that the h3D model captures the observed storage, base flow, and overland flow dynamics of both the larger LEO and the smaller miniLEO hillslopes very well. Sensitivity tests are also performed to understand h3D's difficulty in representing the height of the saturated zone close to the seepage face of the miniLEO hillslope. Results reveal that a temporally constant parameter set is able to simulate the response of the miniLEO for each individual event. However, when one focuses on the saturated zone dynamics at 0.15 m from the seepage face, a stepwise evolution of the optimal value of the saturated lateral conductivity of the gravel layer occurs. This evolution might be related to the migration of soil particles within the hillslope. However, it is currently unclear whether and where this takes place (in the seepage face or within parts of the loamy sand soil).

  9. A virtual environment for the accurate geologic analysis of Martian terrain

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Paar, Gerhard; Gupta, Sanjeev; Hesina, Gerd; Sander, Kathrin; Barnes, Rob; Nauschnegg, Bernhard; Muller, Jan-Peter; Tao, Yu

    2015-04-01

    Remote geology on planetary surfaces requires immersive presentation of the environment to be investigated. Three-dimensional (3D) processing of images from rovers and satellites enables terrain to be reconstructed in virtual space on Earth for scientific analysis. In this paper we present a virtual environment that allows users to interactively explore 3D-reconstructed Martian terrain and perform accurate measurements on the surface. Geologists require not only line-of-sight measurements between two points but, more importantly, the line of sight projected onto the surface between those points. Furthermore, the tool supports defining paths of several points. It is also important for geologists to annotate the terrain they explore, especially when collaborating with colleagues. The path tool can also be used to separate geological layers or to enclose areas of interest. Paths can be linked with a text label directly positioned in 3D space and always oriented towards the viewing direction. All measurements and annotations can be managed through a graphical user interface and used as landmarks, i.e., it is possible to fly to the corresponding locations. The virtual environment is fed with 3D vision products from rover cameras, placed in the 3D context gained from satellite images (digital elevation models and corresponding ortho images). This allows investigations at various scales, from planet to microscopic level, in a seamless manner. The modes of exploitation and added value of such an interactive means are manifold. The visualisation products enable us to map geological surfaces and rock layers over large areas in a quantitative framework. Accurate geometrical relationships of rock bodies, especially for sedimentary layers, can be reconstructed, and the relationships between superposed layers can be established. Within sedimentary layers, we can delineate sedimentary facies and other characteristics. In particular, the inclination of beds, which may help ascertain flow directions, can be
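
    The distinction between a straight line-of-sight distance and the same measurement projected onto the surface can be illustrated with the short sketch below; the height field is a synthetic stand-in, not the mission's digital elevation products.

```python
# Straight line-of-sight distance vs. the path projected onto the surface,
# obtained by sampling the height field along the (x, y) line and summing
# the resulting 3D segment lengths.
import numpy as np

def height(x, y):
    """Stand-in digital elevation model (assumed analytic surface)."""
    return 2.0 * np.sin(x / 5.0) + 1.5 * np.cos(y / 7.0)

def surface_distance(p, q, samples=1000):
    t = np.linspace(0.0, 1.0, samples)
    xs = p[0] + t * (q[0] - p[0])
    ys = p[1] + t * (q[1] - p[1])
    zs = height(xs, ys)
    seg = np.diff(np.stack([xs, ys, zs], axis=1), axis=0)
    return np.linalg.norm(seg, axis=1).sum()

p, q = (0.0, 0.0), (30.0, 12.0)
los = np.linalg.norm([q[0] - p[0], q[1] - p[1], height(*q) - height(*p)])
print(f"line of sight: {los:.2f} m, projected on surface: {surface_distance(p, q):.2f} m")
```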

  10. Rigid body cable for virtual environments.

    PubMed

    Servin, Martin; Lacoursière, Claude

    2008-01-01

    The present paper addresses real-time simulation of cables for virtual environments. A faithful physical model based on constrained rigid bodies is introduced and discretized. The performance and stability of the numerical method are analyzed in detail and found to meet the requirements of interactive heavy hoisting simulations. The physical model is well behaved in the limit of infinite stiffness as well as in the elastic regime, and the tuning parameters correspond directly to conventional material constants. The integration scheme mixes the well-known Störmer-Verlet method for the dynamics equations with the linearly implicit Euler method for the constraint equations and allows for physical constraint relaxation and stabilization terms. The technique is shown to have superior numerical stability properties in comparison with either chain-link systems or spring-and-damper models. Experimental results are presented to show that the method results in stable, real-time simulations. Stability persists for a moderately large fixed integration step of Delta t = 1/60 s, with hoisting loads up to 10^5 times heavier than the elements of the cable. Further numerical experiments validating the physical model are also presented. PMID:18467754
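
    A highly simplified sketch of the general idea follows: a point-mass chain stepped with Störmer-Verlet and inextensibility enforced by iterative constraint projection. This is not the paper's constrained rigid-body formulation, and the mass ratio and step size are merely illustrative, but it shows why constraint-based cables tolerate heavy hoisted loads that would destabilize stiff spring-and-damper models.

```python
# Hanging cable as a particle chain: Stormer-Verlet step under gravity,
# then distance constraints are enforced by mass-weighted projection.
import numpy as np

n, seg = 10, 0.5                         # number of nodes, segment rest length (m)
mass = np.ones(n)
mass[-1] = 1.0e3                         # last node carries a heavy load (assumed ratio)
x = np.stack([np.arange(n) * seg, np.zeros(n)], axis=1)   # cable starts horizontal
x_prev = x.copy()
g = np.array([0.0, -9.81])
dt = 1.0 / 60.0

for step in range(300):
    # Stormer-Verlet position update under gravity (no damping).
    x_new = 2.0 * x - x_prev + g * dt**2
    x_prev, x = x, x_new
    x[0] = (0.0, 0.0)                    # first node is attached to the crane tip
    # Constraint relaxation: project each segment back to its rest length,
    # distributing the correction by inverse mass.
    for _ in range(50):
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            dist = np.linalg.norm(d)
            corr = (dist - seg) * d / dist
            w0, w1 = 1.0 / mass[i], 1.0 / mass[i + 1]
            if i == 0:
                w0 = 0.0                 # anchored node does not move
            x[i] += corr * w0 / (w0 + w1)
            x[i + 1] -= corr * w1 / (w0 + w1)

print("final load position:", np.round(x[-1], 3))
```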

  11. Multimedia virtualized environment for shoulder pain rehabilitation

    PubMed Central

    Chen, Chih-Chen

    2016-01-01

    [Purpose] Researchers imported games and virtual reality training to help participants train their shoulders in a relaxed environment. [Subjects and Methods] This study included the use of Kinect somatosensory device with Unity software to develop 3-dimensional situational games. The data collected from this training process can be uploaded via the Internet to a cloud or server for participants to perform self-inspection. The data can be a reference for the medical staff to assess training effectiveness for those with impairments and plan patient rehabilitation courses. [Results] In the training activities, 8 subjects with normal shoulder function demonstrated that the system has good stability and reproducibility. Six subjects with impaired shoulder underwent 6 weeks of training. During the third week of training, average performance stabilized. The t-test comparing 1–2 weeks to 3–4 weeks and 5–6 weeks showed significant differences. [Conclusion] Using games as training methods improved patient concentration, interest in participation and allowed patients to forget about their body discomfort. The equipment utilized in this study is inexpensive, easy to obtain, and the system is easy to install. People can perform simple self-training both at home or in the office. PMID:27190481

  12. Site remediation in a virtual environment

    SciTech Connect

    Bethel, W.; Jacobsen, J.; Holland, P.

    1994-01-01

    We describe the process used in combining an existing computer simulation with both Virtual Reality (VR) input and output devices and conventional visualization tools, so as to make the simulation easier to use and the results easier to understand. VR input technology facilitates direct user manipulation of three-dimensional simulation parameters. Commercially available visualization tools provide a flexible environment for representing abstract scientific data. VR output technology provides a more flexible and convincing way to view the visualization results than is afforded by contemporary visualization software. The desired goal of this process is a prototype system that minimizes man-machine interface barriers and enhances control over the simulation itself, so as to maximize the use of scientific judgement and intuition. In environmental remediation, the goal is to clean up contaminants either by removing them or by rendering them non-toxic. A computer model simulates water or chemical flooding to mobilize and extract hydrocarbon contaminants from a volume of saturated soil/rock. Several wells are drilled in the vicinity of the contaminant, water and/or chemicals are injected into some of the wells, and fluid containing the mobilized hydrocarbons is pumped out of the remaining wells. The user is tasked with finding well locations and pumping rates that maximize recovery of the contaminants while minimizing the drilling and pumping costs of cleaning up the site of interest.

  13. Multimedia virtualized environment for shoulder pain rehabilitation.

    PubMed

    Chen, Chih-Chen

    2016-04-01

    [Purpose] Researchers imported games and virtual reality training to help participants train their shoulders in a relaxed environment. [Subjects and Methods] This study included the use of Kinect somatosensory device with Unity software to develop 3-dimensional situational games. The data collected from this training process can be uploaded via the Internet to a cloud or server for participants to perform self-inspection. The data can be a reference for the medical staff to assess training effectiveness for those with impairments and plan patient rehabilitation courses. [Results] In the training activities, 8 subjects with normal shoulder function demonstrated that the system has good stability and reproducibility. Six subjects with impaired shoulder underwent 6 weeks of training. During the third week of training, average performance stabilized. The t-test comparing 1-2 weeks to 3-4 weeks and 5-6 weeks showed significant differences. [Conclusion] Using games as training methods improved patient concentration, interest in participation and allowed patients to forget about their body discomfort. The equipment utilized in this study is inexpensive, easy to obtain, and the system is easy to install. People can perform simple self-training both at home or in the office. PMID:27190481

  14. Studying chemical reactivity in a virtual environment.

    PubMed

    Haag, Moritz P; Reiher, Markus

    2014-01-01

    Chemical reactivity of a set of reactants is determined by its potential (electronic) energy (hyper)surface. The high dimensionality of this surface renders it difficult to efficiently explore reactivity in a large reactive system. Exhaustive sampling techniques and search algorithms are not straightforward to employ as it is not clear which explored path will eventually produce the minimum energy path of a reaction passing through a transition structure. Here, the chemist's intuition would be of invaluable help, but it cannot be easily exploited because (1) no intuitive and direct tool for the scientist to manipulate molecular structures is currently available and because (2) quantum chemical calculations are inherently expensive in terms of computational effort. In this work, we elaborate on how the chemist can be reintroduced into the exploratory process within a virtual environment that provides immediate feedback and intuitive tools to manipulate a reactive system. We work out in detail how this immersion should take place. We provide an analysis of modern semi-empirical methods which already today are candidates for the interactive study of chemical reactivity. Implications of manual structure manipulations for their physical meaning and chemical relevance are carefully analysed in order to provide sound theoretical foundations for the interpretation of the interactive reactivity exploration. PMID:25340884

  15. A Virtual Mission Operations Center - Collaborative Environment

    NASA Technical Reports Server (NTRS)

    Medina, Barbara; Bussman, Marie

    2002-01-01

    Development of technologies that enable significant reductions in the cost of space mission operations is critical if constellations, formations, federations and sensor webs are to be economically feasible. One approach to cost reduction is to infuse automation technologies into mission operations centers so that fewer personnel are needed for mission support. However, mission teams remain culturally and politically averse to the risks of automation. Reducing the mission risk associated with increased use of automation within a MOC is therefore of great importance. The belief that mission risk increases as more automation is used stems from the fact that there is inherently less direct human oversight to investigate and resolve anomalies in an unattended MOC. The Virtual Mission Operations Center - Collaborative Environment (VMOC-CE) project was launched to address this concern. The goal of the VMOC-CE project is to identify, develop, and infuse technology to enable mission operations between onsite operators and on-call personnel in geographically dispersed locations. VMOC-CE enables missions to more readily adopt automation because off-site operators and engineers can more easily identify, investigate, and resolve anomalies without having to be present in the MOC. The intent of VMOC-CE is to provide a single access point for all resources used in a collaborative mission operations environment. Team members will be able to interact during spacecraft operations, specifically for resolving anomalies, using a desktop computer and the Internet. Mission operations management can use the VMOC-CE as a tool to participate in and monitor the status of anomaly resolution and other mission operations issues. In this paper we present the VMOC-CE project, system capabilities and technologies, operations concept, and results of its pilot in support of the Earth Science Mission Operations System (ESMOS).

  16. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  17. Considerations for Designing Instructional Virtual Environments.

    ERIC Educational Resources Information Center

    Dennen, Vanessa Paz; Branch, Robert C.

    Virtual reality is an immersive, interactive medium that manipulates the senses in order provide users with simulated experiences in computer-generated worlds. The visual design of virtual reality is an important issue, but literature has tended to stress the medium's instructional potential rather than setting forth a protocol for designing…

  18. Virtual Reality: A New Learning Environment.

    ERIC Educational Resources Information Center

    Ferrington, Gary; Loge, Kenneth

    1992-01-01

    Discusses virtual reality (VR) technology and its possible uses in military training, medical education, industrial design and development, the media industry, and education. Three primary applications of VR in the learning process--visualization, simulation, and construction of virtual worlds--are described, and pedagogical and moral issues are…

  19. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology

  20. A Photo-Realistic 3-D Mapping System for Extreme Nuclear Environments: Chornobyl

    NASA Technical Reports Server (NTRS)

    Maimone, M.; Matthies, L.; Osborn, J.; Teza, J.; Thayer, S.

    1998-01-01

    We present a novel stereoscopic mapping system for use in nuclear accident settings. First, we discuss a radiation-shielded sensor array designed to tolerate 10^6 R of cumulative dose. Next, we give procedures to ensure timely, accurate range estimation using trinocular stereo. Finally, we review the implementation of a system for the integration of range information into a 3-D, textured, metrically accurate surface mesh.
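
    As background for the range-estimation step, the sketch below shows the basic rectified-stereo relation between disparity and range; the focal length and baseline are assumed placeholder values, not the Chornobyl rig's calibration.

```python
# Rectified stereo: range Z = f * B / d, with f the focal length in pixels,
# B the camera baseline in metres, and d the disparity in pixels.
focal_px = 800.0        # focal length in pixels (assumed)
baseline_m = 0.30       # baseline between cameras in metres (assumed)

def disparity_to_range(disparity_px):
    return focal_px * baseline_m / disparity_px

for d in (40.0, 20.0, 8.0):
    print(f"disparity {d:5.1f} px  ->  range {disparity_to_range(d):6.2f} m")
```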

  1. Generic robotic kinematic generator for virtual environment interfaces

    NASA Astrophysics Data System (ADS)

    Flueckiger, Lorenzo; Piguet, Laurent; Baur, Charles

    1996-12-01

    The expansion of robotic systems' performance, as well as the need for such machines to work in complex environments (hazardous, small, distant, etc.), creates a need for user interfaces that permit efficient teleoperation. Virtual Reality based interfaces provide the user with a new method for robot task planning and control: he or she can define tasks in a very intuitive way by interacting with a 3D computer-generated representation of the world, which is continuously updated thanks to multiple-sensor fusion and analysis. The Swiss Federal Institute of Technology has successfully tested different kinds of teleoperation. In the early 1990s, a transatlantic teleoperation of a conventional robot manipulator with a vision feedback system to update the virtual world was achieved. This approach was then extended to the teleoperation of several mobile robots (Khepera, Koala) as well as to the control of microrobots used for microsystem assembly in the micrometer range. One of the problems encountered with such an approach is the need to program a specific kinematic algorithm for each kind of manipulator. To provide a more general solution, we started a project aimed at the design of a 'kinematic generator' (CINEGEN) for the simulation of generic serial and parallel mechanical chains. With CINEGEN, each manipulator is defined by an ASCII description file and its attached graphics files; inserting a new manipulator simply requires a new description file, and none of the existing tools require modification. To achieve real-time behavior, we have chosen a numerical method based on the pseudo-Jacobian method to generate the inverse kinematics of the robot. The results obtained with an object-oriented implementation on a graphics workstation are presented in this paper.
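
    The sketch below illustrates the kind of numerical, pseudo-inverse-Jacobian inverse kinematics the abstract refers to, applied to an invented planar 3-link arm with a finite-difference Jacobian; it is not the CINEGEN code itself, and link lengths, target, and step size are assumptions.

```python
# Numerical inverse kinematics via the pseudo-inverse of the Jacobian
# for an illustrative planar 3-link serial chain.
import numpy as np

link_lengths = np.array([1.0, 0.8, 0.5])    # assumed link lengths

def forward(q):
    """End-effector position of a planar serial chain with joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def numerical_jacobian(q, eps=1e-6):
    """Finite-difference Jacobian, as a generic kinematic generator might use."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (forward(q + dq) - forward(q - dq)) / (2.0 * eps)
    return J

def solve_ik(target, q0, iters=200, step=0.5):
    q = q0.astype(float)
    for _ in range(iters):
        err = target - forward(q)
        if np.linalg.norm(err) < 1e-6:
            break
        J = numerical_jacobian(q)
        q += step * np.linalg.pinv(J) @ err   # pseudo-inverse update
    return q

# A slightly bent initial guess avoids starting at the stretched-out singular pose.
q = solve_ik(np.array([1.2, 1.0]), q0=np.array([0.3, 0.3, 0.3]))
print("joint angles:", np.round(q, 3), "reach:", np.round(forward(q), 3))
```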

  2. Future Evolution of Virtual Worlds as Communication Environments

    NASA Astrophysics Data System (ADS)

    Prisco, Giulio

    Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.

  3. The Synergetic Effect of Learning Styles on the Interaction between Virtual Environments and the Enhancement of Spatial Thinking

    ERIC Educational Resources Information Center

    Hauptman, Hanoch; Cohen, Arie

    2011-01-01

    Students have difficulty learning 3D geometry; spatial thinking is an important aspect of the learning processes in this academic area. In light of the unique features of virtual environments and the influence of metacognitive processes (e.g., self-regulating questions) on the teaching of mathematics, we assumed that a combination of…

  4. Two implementations of shared virtual space environments.

    SciTech Connect

    Disz, T. L.

    1998-01-13

    While many issues in the area of virtual reality (VR) research have been addressed in recent years, constant leaps in technology continue to push the field forward. VR research is no longer focused only on computer graphics; it has become even more interdisciplinary, combining the fields of networking, distributed computing, and even artificial intelligence. In this article we discuss some of the issues associated with distributed, collaborative virtual reality, as well as lessons learned during the development of two distributed virtual reality applications.

  5. Is a dark virtual environment scary?

    PubMed

    Toet, Alexander; van Welie, Marloes; Houtkamp, Joske

    2009-08-01

    This study investigated the effects of nighttime lighting conditions and stress on the affective appraisal of a virtual environment (VE). The effective application of VEs in emotionally intense simulations requires precise control over the characteristics that affect the user's emotions and behavior. It is known that humans have an innate fear of darkness, which increases after exposure to stress and extrapolates to ecologically valid (immersive) VEs. This study investigated whether the simulated level of illumination determines the affective appraisal of a VE, particularly after stress. Participants explored either a daytime or a nighttime version of a VE, after performing either an acute psychosocial stress task (Trier Social Stress Test, or TSST) or a relaxing control task. The affective qualities of the VE were appraised through the Russell and Pratt semantic questionnaire on the valence and arousal dimensions. Distress was assessed through free salivary cortisol, the state self-report scale from the Spielberger State-Trait Anxiety Inventory (STAI), and heart rate. In addition, memory for scenic details was tested through a yes-no recognition test. Free salivary cortisol levels, heart rates, and scores on the STAI all indicate that participants who were subjected to the stress task indeed showed signs of distress, whereas participants in the control group showed no signs of stress. The results of the semantic questionnaire and the recognition test showed no significant overall effect of time-of-day conditions on the affective appraisal of the VE or on the recognition of its details, even after prior stress. The experiences of users exploring the VE were not affected by the simulated lighting conditions, even after acute prior stress. Thus, lowering the illumination level in a desktop VE is not sufficient to elicit anxiety. Hence, desktop VE representations differ from immersive VE representations in this respect. This finding has implications for desktop VE

  6. Nature and origins of virtual environments - A bibliographical essay

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    1991-01-01

    Virtual environments presented via head-mounted, computer-driven displays provide a new medium for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references in the dispersed archival literature that are relevant to the development and evaluation of virtual-environment interface systems.

  7. Designing a Virtual-Reality-Based, Gamelike Math Learning Environment

    ERIC Educational Resources Information Center

    Xu, Xinhao; Ke, Fengfeng

    2016-01-01

    This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…

  8. Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments

    NASA Technical Reports Server (NTRS)

    Wu (u. Sjarpm)

    2012-01-01

    The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.

  9. Impact of Virtual Work Environment on Traditional Team Domains.

    ERIC Educational Resources Information Center

    Geroy, Gary D.; Olson, Joel; Hartman, Jackie

    2002-01-01

    Examines a virtual work team to determine the domains of the team and the effect the virtual work environment had on the domains. Discusses results of a literature review and a phenomenological heuristic case study, including the effects of post-modern philosophy and postindustrial society on changes in the marketplace. (Contains 79 references.)…

  10. Addition of 3D scene attributes to a virtual landscape of Al-Madinah Al-Munwwarah in Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alshammari, Saleh; Hayes, Ladson W.

    2003-03-01

    A 3-dimensional virtual landscape of Al-Madinah Al-Munwwarah in Saudi Arabia has been produced. A Triangular Irregular Network (TIN) interpolation method has been used to create a digital elevation model (DEM) from digital topographic maps at 1:1000 scale. High-resolution aerial photography has been merged with satellite imagery to drape over the DEM. The resultant DEM and fused overlay images have been imported into Internet Space Builder software in order to add several attributes to the scene and to create an interactive virtual reality modelling language (VRML) model to support walk-throughs of the scene.
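
    A minimal sketch of TIN-style DEM construction follows, using synthetic spot heights rather than the Al-Madinah survey data: scattered points are Delaunay-triangulated and linearly interpolated onto a regular grid.

```python
# TIN-style interpolation: scattered (x, y, z) spot heights -> regular DEM grid.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1000.0, size=(500, 2))                    # surveyed (x, y), m
z = 600.0 + 0.02 * xy[:, 0] + 5.0 * np.sin(xy[:, 1] / 100.0)    # synthetic elevations, m

tin = LinearNDInterpolator(xy, z)        # piecewise-linear over the Delaunay triangulation

gx, gy = np.meshgrid(np.arange(0, 1000, 10.0), np.arange(0, 1000, 10.0))
dem = tin(gx, gy)                        # 100 x 100 DEM raster (NaN outside the hull)

print("DEM shape:", dem.shape, "elevation range:",
      np.nanmin(dem).round(1), "-", np.nanmax(dem).round(1), "m")
```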

  11. Ecological validity of virtual environments to assess human navigation ability

    PubMed Central

    van der Ham, Ineke J. M.; Faber, Annemarie M. E.; Venselaar, Matthijs; van Kreveld, Marc J.; Löffler, Maarten

    2015-01-01

    Route memory is frequently assessed in virtual environments. These environments can be presented in a fully controlled manner and are easy to use. Yet they lack the physical involvement that participants have when navigating real environments. For some aspects of route memory this may result in reduced performance in virtual environments. We assessed route memory performance in four different environments: real, virtual, virtual with directional information (compass), and hybrid. In the hybrid environment, participants walked the route outside on an open field, while all route information (i.e., path, landmarks) was shown simultaneously on a handheld tablet computer. Results indicate that performance in the real life environment was better than in the virtual conditions for tasks relying on survey knowledge, like pointing to start and end point, and map drawing. Performance in the hybrid condition however, hardly differed from real life performance. Performance in the virtual environment did not benefit from directional information. Given these findings, the hybrid condition may offer the best of both worlds: the performance level is comparable to that of real life for route memory, yet it offers full control of visual input during route learning. PMID:26074831

  12. Preparation and presentation of cultural content in virtual environment

    NASA Astrophysics Data System (ADS)

    Zara, Jiri

    2003-01-01

    The paper presents a web-based application for the preparation and presentation of various two- and three-dimensional cultural showpieces in a virtual environment. Specific task modules built on a common database provide tools for designing spatial models of a real or a fully virtual gallery, exhibit management, arrangement of exhibits within the virtual space, and final web presentation using a standard VRML browser and a Java applet. The whole application serves different kinds of users: gallery owners, artists, and visitors. The use of virtual reality paradigms for image presentation purposes is also discussed.

  13. The Viability of Virtual Worlds in Higher Education: Can Creativity Thrive outside the Traditional Classroom Environment?

    ERIC Educational Resources Information Center

    Bradford, Linda M.

    2012-01-01

    In spite of the growing popularity of virtual worlds for gaming, recreation, and education, few studies have explored the efficacy of 3D immersive virtual worlds in post-secondary instruction; even fewer discuss the ability of virtual worlds to help young adults develop creative thinking. This study investigated the effect of virtual world…

  14. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  15. Use of 3D laser radar for navigation of unmanned aerial and ground vehicles in urban and indoor environments

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Don; Smearcheck, Mark

    2007-04-01

    This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D) imaging sensor for position and attitude determination of unmanned aerial vehicles (UAV) and autonomous ground vehicles (AGV) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely dependent on the Global Positioning System (GPS). In urban and indoor environments a GPS position capability may be unavailable not only due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable solution, an alternative method is discussed in this paper. The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like a conventional camera, this sensor creates an "image" of the environment; however, instead of producing a 2D image in which each pixel has an associated intensity value, the flash Ladar generates an image in which each pixel has associated range and intensity values. Integration of flash Ladar with attitude from the IMU allows creation of a 3-D scene. Current low-cost Flash Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. The proposed algorithm first converts the 3D imaging sensor measurements to a 3D point cloud; next, significant environmental features such as planar features (walls), line features or point features (corners) are extracted and associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features such as their normal or direction vectors are used to compute the platform position and attitude
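
    The sketch below illustrates two building blocks of this kind of pipeline on synthetic data (it is not the authors' algorithm): extracting a planar feature's normal from a patch of range points by SVD, and recovering the frame-to-frame rotation from matched feature directions with the Kabsch solution. The wall patches, yaw angle and feature choices are assumptions for illustration.

```python
# Plane-normal extraction from a point patch and rotation recovery from
# matched feature directions (orthogonal Procrustes / Kabsch).
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through a set of 3D points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]                          # direction of least variance

def rotation_from_directions(dirs_a, dirs_b):
    """Rotation R with dirs_b ~ R @ dirs_a (Kabsch), directions as row vectors."""
    h = dirs_a.T @ dirs_b
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

rng = np.random.default_rng(2)
# Two synthetic wall patches observed in frame A (points stored as rows).
wall1 = rng.uniform(size=(200, 3)) * [4.0, 3.0, 0.01]   # roughly the z = 0 plane
wall2 = rng.uniform(size=(200, 3)) * [0.01, 3.0, 4.0]   # roughly the x = 0 plane
n_a = np.stack([plane_normal(wall1), plane_normal(wall2), [0.0, 1.0, 0.0]])

# The platform yaws by 10 degrees between frames; frame-B directions rotate with it.
yaw = np.radians(10.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
n_b = (R_true @ n_a.T).T

R_est = rotation_from_directions(n_a, n_b)
print("recovered yaw:",
      np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])).round(2), "deg")
```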

  16. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  17. Caring in the Dynamics of Design and Languaging: Exploring Second Language Learning in 3D Virtual Spaces

    ERIC Educational Resources Information Center

    Zheng, Dongping

    2012-01-01

    This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…

  18. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  19. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene-of-crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.

  20. Headphone and Head-Mounted Visual Displays for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.

  1. Methods and systems relating to an augmented virtuality environment

    DOEpatents

    Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J

    2014-05-20

    Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.

  2. An interactive virtual environment for finite element analysis

    SciTech Connect

    Bradshaw, S.; Canfield, T.; Kokinis, J.; Disz, T.

    1995-06-01

    Virtual environments (VE) provide a powerful human-computer interface that opens the door to exciting new methods of interaction with high-performance computing applications in several areas of research. The authors are interested in the use of virtual environments as a user interface to real-time simulations used in rapid prototyping procedures. Consequently, the authors are developing methods for coupling finite element models of complex mechanical systems with a VE interface for real-time interaction.

  3. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  4. Designing user models in a virtual cave environment

    SciTech Connect

    Brown-VanHoozer, S.; Hudson, R.; Gokhale, N.

    1995-12-31

    In this paper, the results of a first study into the use of virtual reality for human factor studies and design of simple and complex models of control systems, components, and processes are described. The objective was to design a model in a virtual environment that would reflect more characteristics of the user's mental model of a system and fewer of the designer's. The technology of a CAVE(TM) virtual environment and the methodology of Neuro Linguistic Programming were employed in this study.

  5. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed. PMID:20557250

  6. Overview of 3D-TRACE, a NASA Initiative in Three-Dimensional Tomography of the Aerosol-Cloud Environment

    NASA Astrophysics Data System (ADS)

    Davis, Anthony; Diner, David; Yanovsky, Igor; Garay, Michael; Xu, Feng; Bal, Guillaume; Schechner, Yoav; Aides, Amit; Qu, Zheng; Emde, Claudia

    2013-04-01

    Remote sensing is a key tool for sorting cloud ensembles by dynamical state, aerosol environments by source region, and establishing causal relationships between aerosol amounts, type, and cloud microphysics (the so-called indirect aerosol climate impacts, and one of the main sources of uncertainty in current climate models). Current satellite imagers use data processing approaches that invariably start with cloud detection/masking to isolate aerosol air-masses from clouds, and then rely on one-dimensional (1D) radiative transfer (RT) to interpret the aerosol and cloud measurements in isolation. Not only does this lead to well-documented biases for the estimates of aerosol radiative forcing and cloud optical depths in current missions, but it is fundamentally inadequate for future missions such as EarthCARE where capturing the complex, three-dimensional (3D) interactions between clouds and aerosols is a primary objective. In order to advance the state of the art, the next generation of satellite information processing systems must incorporate technologies that will enable the treatment of the atmosphere as a fully 3D environment, represented more realistically as a continuum. At one end, there is an optically thin background dominated by aerosols and molecular scattering that is strongly stratified and relatively homogeneous in the horizontal. At the other end, there are optically thick embedded elements, clouds and aerosol plumes, which can be more or less uniform and quasi-planar or else highly 3D with boundaries in all directions; in both cases, strong internal variability may be present. To make this paradigm shift possible, we propose to combine the standard models for satellite signal prediction physically grounded in 1D and 3D RT, both scalar and vector, with technologies adapted from biomedical imaging, digital image processing, and computer vision. This will enable us to demonstrate how the 3D distribution of atmospheric constituents, and their associated

  7. Intelligent Tutors in Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Yan, Peng; Slator, Brian M.; Vender, Bradley; Jin, Wei; Kariluoma, Matti; Borchert, Otto; Hokanson, Guy; Aggarwal, Vaibhav; Cosmano, Bob; Cox, Kathleen T.; Pilch, André; Marry, Andrew

    2013-01-01

    Research into virtual role-based learning has progressed over the past decade. Modern issues include gauging the difficulty of designing a goal system capable of meeting the requirements of students with different knowledge levels, and the reasonability and possibility of taking advantage of the well-designed formula and techniques served in other…

  8. Tasks for Easily Modifiable Virtual Environments

    ERIC Educational Resources Information Center

    Swier, Robert

    2014-01-01

    Recent studies of learner interaction in virtual worlds have tended to select basic tasks involving open-ended communication. There is evidence that such tasks are supportive of language acquisition; however, it may also be beneficial to consider more complex tasks. Research in task-based learning has identified features such as non-linguistic…

  9. Training through Telematics in Virtual Environment.

    ERIC Educational Resources Information Center

    Sharma, C. B.

    1997-01-01

    Defines telematics and argues that India should exploit telematics resources for training. Describes training through telematics and virtual means, as well as a redefinition of the notion of training. Discusses the cost factor and its influence on policy at different levels. (AEF)

  10. Building Analysis for Urban Energy Planning Using Key Indicators on Virtual 3d City Models - the Energy Atlas of Berlin

    NASA Astrophysics Data System (ADS)

    Krüger, A.; Kolbe, T. H.

    2012-07-01

    In the context of increasing greenhouse gas emissions and global demographic change with a simultaneous trend toward urbanization, it is a big challenge for cities around the world to modify their energy supply chains and building characteristics so as to reduce energy consumption and mitigate carbon dioxide emissions. Sound knowledge of energy resource demand and supply, including its spatial distribution within urban areas, is of great importance for planning strategies addressing greater energy efficiency. Understanding the city as a complex energy system affects several areas of urban living, e.g. energy supply, urban texture, human lifestyle, and climate protection. With the growing availability of 3D city models around the world based on the standard language and format CityGML, energy system modelling, analysis and simulation can be incorporated into these models. Both domains will profit from this interaction by bringing together official and accurate building models, including building geometries, semantics and locations that form a realistic image of the urban structure, with systemic energy simulation models. A holistic view of the impacts of energy planning scenarios can be modelled and analyzed, including side effects on urban texture and human lifestyle. This paper focuses on the identification, classification, and integration of energy-related key indicators of buildings and neighbourhoods within 3D building models. Consistent application of 3D city models conforming to CityGML serves the purpose of deriving indicators for this topic. These will be set into the context of urban energy planning within the Energy Atlas Berlin. The generation of indicator objects covering the indicator values and related processing information will be presented using the sample scenario of estimating heating energy consumption in buildings and neighbourhoods. In their entirety the key indicators will form an adequate image of the local energy situation for

  11. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance over the last decade as tools for researching and visualizing no longer extant historic objects. Within such reconstruction processes, visual media assumes several important roles: as the most important class of sources, especially for the reconstruction of no longer extant objects; as a tool for communication and cooperation within the production process; and for the communication and visualization of results. While there are many discourses on theoretical issues of depiction as source material and as visualization outcome of such projects, there has been no systematic, empirically grounded research on the importance of depiction during the 3D reconstruction process. Moreover, from a methodological perspective, it is necessary to understand what role visual media play during the production process and how that role is affected by disciplinary boundaries and by challenges specific to historic topics. The research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from the social sciences to gain a grounded view of how production processes take place in practice and which functions and roles images play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of the humanities was completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer, or never, existed physically. That type of project in particular is interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination, the authors of this paper applied a qualitative content analysis to a sample of 26 previously

  12. Identification of source velocities on 3D structures in non-anechoic environments: Theoretical background and experimental validation of the inverse patch transfer functions method

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; Totaro, N.; Guyader, J.-L.

    2010-08-01

    In noise control, identification of the source velocity field remains a major open problem. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function and hybrid NAH have been developed. However, these methods require free-field conditions that are often difficult to achieve in practice. This article presents an alternative method known as inverse patch transfer functions (iPTF), designed to identify source velocities and developed in the framework of the European SILENCE project. The method is based on the definition of a virtual cavity; the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume, which are divided into elementary areas called patches; and the inversion of impedance matrices, numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries, and measurements can be carried out in a non-anechoic environment, even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.
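
    The central inversion step described above can be illustrated with a minimal numerical sketch. The code below is an illustration under stated assumptions, not the authors' implementation: it takes a hypothetical patch impedance matrix Z relating patch velocities to patch pressures (p = Z v) and recovers the velocities by Tikhonov-regularized least squares, since such impedance matrices are typically ill-conditioned.

      import numpy as np

      def identify_source_velocities(Z, p, reg=1e-6):
          """Recover patch velocities v from measured patch pressures p,
          assuming p = Z @ v, via Tikhonov-regularized least squares.
          Only the inversion step is sketched; in iPTF, Z would be built
          from an FEM modal basis of the virtual cavity."""
          ZtZ = Z.conj().T @ Z
          rhs = Z.conj().T @ p
          return np.linalg.solve(ZtZ + reg * np.eye(ZtZ.shape[0]), rhs)

      # Synthetic check with made-up data: 40 patches
      rng = np.random.default_rng(0)
      Z = rng.standard_normal((40, 40)) + 1j * rng.standard_normal((40, 40))
      v_true = rng.standard_normal(40)
      p = Z @ v_true
      print(np.allclose(identify_source_velocities(Z, p), v_true, atol=1e-4))

    The regularization weight trades noise amplification against bias; in practice it would be chosen from the measurement noise level rather than fixed as here.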

  13. The Cultural Divide: Exponential Growth in Classical 2D and Metabolic Equilibrium in 3D Environments

    PubMed Central

    Kanlaya, Rattiyaporn; Borkowski, Kamil; Schwämmle, Veit; Dai, Jie; Joensen, Kira Eyd; Wojdyla, Katarzyna; Carvalho, Vasco Botelho; Fey, Stephen J.

    2014-01-01

    Introduction Cellular metabolism can be considered to have two extremes: one is characterized by exponential growth (in 2D cultures) and the other by a dynamic equilibrium (in 3D cultures). We have analyzed the proteome and cellular architecture at these two extremes and found that they are dramatically different. Results Structurally, actin organization is changed, microtubules are increased, and keratins 8 and 18 are decreased. Metabolically, glycolysis, fatty acid metabolism and the pentose phosphate shunt are increased, while the TCA cycle and oxidative phosphorylation are unchanged. Enzymes involved in cholesterol and urea synthesis are increased, consistent with the attainment of cholesterol and urea production rates seen in vivo. DNA repair enzymes are increased even though cells are predominantly in G0. Transport around the cell (along the microtubules, through the nuclear pore, and in various types of vesicles) has been prioritized. There are numerous coherent changes in transcription, splicing, translation, protein folding and degradation. The amount of individual proteins within complexes is shown to be highly coordinated. Typically, subunits which initiate a particular function are present in increased amounts compared to other subunits of the same complex. Summary We have previously demonstrated that cells at dynamic equilibrium can match the physiological performance of cells in tissues in vivo. Here we describe the multitude of protein changes necessary to achieve this performance. PMID:25222612

  14. Using virtual reality environment to improve joint attention associated with pervasive developmental disorder.

    PubMed

    Cheng, Yufang; Huang, Ruowen

    2012-01-01

    The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD: when they make errors during practice, there are no painful or dangerous consequences to deal with. Joint attention is a critical skill among the characteristics of children with PDD, and its absence is a deficit that frequently affects their social relationships in daily life. Therefore, this study designed the Joint Attention Skills Learning (JASL) system with a data glove tool to help children with PDD practice joint attention behaviors. The JASL specifically targets the skills of pointing, showing, sharing things, and behavioral interaction with other children with PDD. The system is set in a playroom scene and presented from a first-person perspective. Its functions include pointing and showing, moving virtual objects, 3D animation, text, speech sounds, and feedback. The study employed a single-subject multiple-probe design across subjects and visual inspection analysis; the experimental phase took three months to complete. Surprisingly, the results reveal that the participants further extended and improved their joint attention skills in daily life after using the JASL system. The significant potential of this particular treatment of joint attention for each participant is discussed in detail in this paper. PMID:22776822

  15. Migration in Confined 3D Environments Is Determined by a Combination of Adhesiveness, Nuclear Volume, Contractility, and Cell Stiffness

    PubMed Central

    Lautscham, Lena A.; Kämmerer, Christoph; Lange, Janina R.; Kolb, Thorsten; Mark, Christoph; Schilling, Achim; Strissel, Pamela L.; Strick, Reiner; Gluth, Caroline; Rowat, Amy C.; Metzner, Claus; Fabry, Ben

    2015-01-01

    In cancer metastasis and other physiological processes, cells migrate through the three-dimensional (3D) extracellular matrix of connective tissue and must overcome the steric hindrance posed by pores that are smaller than the cells. It is currently assumed that low cell stiffness promotes cell migration through confined spaces, but other factors such as adhesion and traction forces may be equally important. To study 3D migration under confinement in a stiff (1.77 MPa) environment, we use soft lithography to fabricate polydimethylsiloxane (PDMS) devices consisting of linear channel segments with 20 μm length, 3.7 μm height, and a decreasing width from 11.2 to 1.7 μm. To study 3D migration in a soft (550 Pa) environment, we use self-assembled collagen networks with an average pore size of 3 μm. We then measure the ability of four different cancer cell lines to migrate through these 3D matrices, and correlate the results with cell physical properties including contractility, adhesiveness, cell stiffness, and nuclear volume. Furthermore, we alter cell adhesion by coating the channel walls with different amounts of adhesion proteins, and we increase cell stiffness by overexpression of the nuclear envelope protein lamin A. Although all cell lines are able to migrate through the smallest 1.7 μm channels, we find significant differences in the migration velocity. Cell migration is impeded in cell lines with larger nuclei, lower adhesiveness, and to a lesser degree also in cells with lower contractility and higher stiffness. Our data show that the ability to overcome the steric hindrance of the matrix cannot be attributed to a single cell property but instead arises from a combination of adhesiveness, nuclear volume, contractility, and cell stiffness. PMID:26331248

  16. Stereoscopic helmet mounted system for real time 3D environment reconstruction and indoor ego-motion estimation

    NASA Astrophysics Data System (ADS)

    Donato, Giuseppe; Sequeira, Vitor M.; Sadka, Abdul

    2008-04-01

    A novel type of stereoscopic helmet-mounted system for simultaneous user localization and mapping applications is described. This paper presents precise real-time volumetric data reconstruction. The system is designed for users who need to explore and navigate unprepared indoor environments without any support from GPS signals or environment preparation through preinstalled markers. Augmented reality features in support of self-navigation can be added interactively by placing virtual markers at the desired positions in the world coordinate system. These markers can then be retrieved when they re-enter the user's field of view, serving as visual alerts or as aids for retracing a path.

  17. Human and tree classification based on a model using 3D ladar in a GPS-denied environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2013-05-01

    This study presents a method to classify humans and trees by extracting their geometric and statistical features from data obtained with a 3D LADAR. In a wooded, GPS-denied environment it is difficult to identify the location of unmanned ground vehicles, and it is also difficult to properly recognize the environment in which these vehicles move. In this study, using the point cloud data obtained via 3D LADAR, a method to extract the features of humans, trees, and other objects within an environment was implemented and verified through the processes of segmentation, feature extraction, and classification. First, for segmentation, the radially bounded nearest neighbor method was applied. Second, for feature extraction, each segmented object was divided into three parts and their geometric and statistical features were extracted. A human was divided into head, trunk, and legs; a tree was divided into top, middle, and bottom. The geometric features were the variance of the x-y data about the center of each part of an object and the distances between the central points of the parts, with the centers obtained via K-means clustering; the statistical features were the variances of each part. In this study, three, six and six features were extracted, respectively, resulting in a total of 15 features. Finally, after training an artificial neural network on the extracted data, new data were classified. This study reports the results of an experiment that applied the proposed algorithm on a vehicle equipped with 3D LADAR in a thickly forested area, a GPS-denied environment. A total of 5,158 segments were obtained, and the classification rates for humans and trees were 82.9% and 87.4%, respectively.
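
    As an illustration of the segmentation step named above, the sketch below implements a generic radially bounded nearest-neighbor grouping over an (N, 3) point cloud. The radius, minimum segment size, and the use of scipy's cKDTree are assumptions made for the example and are not taken from the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      def rbnn_segment(points, radius=0.3, min_size=10):
          """Radially bounded nearest-neighbor segmentation: points closer
          than `radius` (directly or through a chain of neighbors) share a
          label; segments smaller than `min_size` are marked -1 (clutter)."""
          tree = cKDTree(points)
          labels = np.full(len(points), -1, dtype=int)
          visited = np.zeros(len(points), dtype=bool)
          next_label = 0
          for seed in range(len(points)):
              if visited[seed]:
                  continue
              stack, members = [seed], []
              visited[seed] = True
              while stack:  # flood-fill the radius-neighborhood graph
                  j = stack.pop()
                  members.append(j)
                  for k in tree.query_ball_point(points[j], radius):
                      if not visited[k]:
                          visited[k] = True
                          stack.append(k)
              if len(members) >= min_size:
                  labels[members] = next_label
                  next_label += 1
          return labels

    Each retained segment would then be split into three height bands (head, trunk, and legs for a human candidate; top, middle, and bottom for a tree candidate) before feature extraction and classification.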

  18. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  19. Virtual Environments: Issues and Opportunities for Researching Inclusive Educational Practices

    NASA Astrophysics Data System (ADS)

    Sheehy, Kieron

    This chapter argues that virtual environments offer new research areas for those concerned with inclusive education. Further, it proposes that they also present opportunities for developing increasingly inclusive research processes. The chapter considers how researchers might approach researching some of these affordances. It discusses the relationship between specific features of inclusive pedagogy, derived from an international systematic literature review, and the affordances of different forms of virtual characters and environments. Examples are drawn from research in Second Life™ (SL), virtual tutors and augmented reality. In doing this, the chapter challenges a simplistic notion of isolated physical and virtual worlds and, in the context of inclusion, of a separation between the practice of research and the research topic itself. There are a growing number of virtual worlds in which identified educational activities are taking place, or whose activities are being noted for their educational merit. These encompass non-themed worlds such as SL and Active Worlds, game-based worlds such as World of Warcraft and Runescape, and even Club Penguin, a themed virtual world where younger players interact through a variety of penguin-themed environments and activities. It has been argued that these spaces, which sit outside traditional education, can offer pedagogical insights (Twining 2009), and these global virtual communities have been identified as useful creative educational environments (Delwiche 2006; Sheehy 2009). This chapter explores how researchers might use these spaces to investigate and create inclusive educational experiences for learners. In order to do this the chapter considers three interrelated questions: What is inclusive education? How might inclusive education influence virtual world research? And what might inclusive education look like in virtual worlds?

  20. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  1. In silico exploration of c-KIT inhibitors by pharmaco-informatics methodology: pharmacophore modeling, 3D QSAR, docking studies, and virtual screening.

    PubMed

    Chaudhari, Prashant; Bari, Sanjay

    2016-02-01

    c-KIT is a component of the platelet-derived growth factor receptor family, classified as a type-III receptor tyrosine kinase. c-KIT has been reported to be involved in small cell lung cancer, other malignant human cancers, and inflammatory and autoimmune diseases associated with mast cells. Available c-KIT inhibitors suffer from growing resistance and cardiac toxicity. A combined in silico pharmacophore- and structure-based virtual screening was performed to identify novel potential c-KIT inhibitors. In the present study, five molecules from the ZINC database were retrieved as new potential c-KIT inhibitors using Schrödinger's Maestro 9.0 molecular modeling suite. An atom-featured 3D QSAR model was built using previously reported c-KIT inhibitors containing the indolin-2-one scaffold. The developed 3D QSAR model ADHRR.24 was found to be significant (R2 = 0.9378, Q2 = 0.7832) and sufficiently robust, with good predictive accuracy, as confirmed through external validation approaches, Y-randomization, and the GH approach [GH score 0.84 and enrichment factor (E) 4.964]. The QSAR model was further validated against OECD principle 3, in that the applicability domain was calculated using a "standardization approach." Molecular docking of the QSAR dataset molecules and of the final ZINC hits was performed on the c-KIT receptor (PDB ID: 3G0E). Docking interactions were in agreement with the developed 3D QSAR model. Model ADHRR.24 was explored for ligand-based virtual screening followed by in silico ADME prediction studies. Five molecules from the ZINC database were obtained as potential c-KIT inhibitors with high in silico predicted activity and strong key binding interactions with the c-KIT receptor. PMID:26416560
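
    For reference, the enrichment factor cited above is conventionally computed from the hit-list counts, and the GH score is commonly given by the Güner-Henry formula sketched below; the numbers used are made up for illustration and are not the study's data.

      def enrichment_factor(Ha, Ht, A, D):
          """EF = (Ha / Ht) / (A / D): how much richer in actives the hit
          list is than the database as a whole. Ha: actives retrieved,
          Ht: total hits retrieved, A: actives in database, D: database size."""
          return (Ha / Ht) / (A / D)

      def gh_score(Ha, Ht, A, D):
          """Guner-Henry goodness-of-hit score, balancing recall of actives
          against contamination of the hit list by inactives."""
          return (Ha * (3 * A + Ht)) / (4 * Ht * A) * (1 - (Ht - Ha) / (D - A))

      # Hypothetical counts, for illustration only
      print(enrichment_factor(Ha=20, Ht=60, A=30, D=450))  # 5.0
      print(gh_score(Ha=20, Ht=60, A=30, D=450))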

  2. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the viewpoint importance in structural geology and therefore the potential of using orthorectified virtual outcrops.

  3. One concept, three implementations of 3D pharmacophore-based virtual screening: distinct coverage of chemical search space.

    PubMed

    Spitzer, Gudrun M; Heiss, Mathias; Mangold, Martina; Markt, Patrick; Kirchmair, Johannes; Wolber, Gerhard; Liedl, Klaus R

    2010-07-26

    Feature-based pharmacophore modeling is a well-established concept to support early stage drug discovery, where large virtual databases are filtered for potential drug candidates. The concept is implemented in popular molecular modeling software, including Catalyst, Phase, and MOE. With these software tools we performed a comparative virtual screening campaign on HSP90 and FXIa, taken from the 'maximum unbiased validation' data set. Despite the straightforward concept that pharmacophores are based on, we observed an unexpectedly high degree of variation among the hit lists obtained. By harmonizing the pharmacophore feature definitions of the investigated approaches, the exclusion volume sphere settings, and the screening parameters, we have derived a rationale for the observed differences, providing insight on the strengths and weaknesses of these algorithms. Application of more than one of these software tools in parallel will result in a widened coverage of chemical space. This is not only rooted in the dissimilarity of feature definitions but also in different algorithmic search strategies. PMID:20583761

  4. Sino-VirtualMoon: A 3D web platform using Chang’E-1 data for collaborative research

    NASA Astrophysics Data System (ADS)

    Chen, Min; Lin, Hui; Wen, Yongning; He, Li; Hu, Mingyuan

    2012-05-01

    The successful launch of the Chinese Chang’E-1 satellite created a valuable opportunity for lunar research and represented China’s remarkable leap in deep space exploration. With the observation data acquired by the Chang’E-1 satellite, a web platform was developed that aims to provide an open research workspace for experts to conduct collaborative scientific research on the Moon. In addition to supporting 3D visualization, the platform provides collaborative tools for basic geospatial analysis of the Moon and supports collaborative simulation of the dynamic formation of lunar impact craters caused by the impacts of meteors (or small asteroids). Based on this platform, experts from the related disciplines can conveniently contribute their domain knowledge to collaborative scientific research on the Moon.

  5. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase into a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were made with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  6. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2004-12-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase into a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were made with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  7. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
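
    To make the decoupling in point (a) concrete, a thin client only needs a tile-addressing convention and a way to enumerate the tiles covering its viewport. The sketch below assumes a hypothetical URL scheme with one pre-rendered oblique-tile pyramid per viewing direction; the names and scheme are illustrative and do not come from the paper.

      def tile_url(base, direction, zoom, row, col):
          """Hypothetical address of a pre-rendered oblique image tile:
          one tile pyramid per viewing direction (e.g. north/east/south/west),
          indexed like an ordinary web-map tile grid."""
          return f"{base}/{direction}/{zoom}/{row}/{col}.jpg"

      def viewport_tiles(zoom, rows, cols):
          """Enumerate (zoom, row, col) indices covering the current viewport."""
          return [(zoom, r, c) for r in range(*rows) for c in range(*cols)]

      for z, r, c in viewport_tiles(3, rows=(4, 6), cols=(2, 5)):
          print(tile_url("https://tiles.example.org", "north", z, r, c))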

  8. Occluded human recognition for a leader following system using 3D range and image data in forest environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Ilyas, Muhammad; Baeg, Seung-Ho; Park, Sangdeok

    2014-06-01

    This paper describes an occluded target recognition and tracking method for a leader-following system that fuses 3D range and image data acquired from a 3D light detection and ranging (LIDAR) sensor and a color camera installed on an autonomous vehicle in a forest environment. During 3D data processing, distance-based clustering has an inherent problem with close encounters between objects. In the tracking phase, we divide the object tracking process into three phases based on the occlusion scenario: a before-occlusion (BO) phase, a partial or full occlusion phase, and an after-occlusion (AO) phase. To improve data association performance, we use the camera's rich information to find correspondences among objects across these three phases. In this paper, we solve the correspondence problem using the color features of human objects with the sum of squared differences (SSD) and the normalized cross correlation (NCC); the features are computed over windows derived from Harris corners. Experimental results for leader following on an autonomous vehicle equipped with LIDAR and a camera show improved data association in a multiple-object tracking system.
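
    The two similarity measures used for data association, SSD and NCC, can be written compactly as below. This is a generic sketch over equally sized grayscale patches; the selection of windows around Harris corners is not shown.

      import numpy as np

      def ssd(patch_a, patch_b):
          """Sum of squared differences between two equally sized patches
          (lower means more similar)."""
          d = patch_a.astype(float) - patch_b.astype(float)
          return float(np.sum(d * d))

      def ncc(patch_a, patch_b):
          """Normalized cross correlation between two patches, in [-1, 1]
          (higher means more similar); insensitive to brightness and
          contrast changes, which helps across occlusion phases."""
          a = patch_a.astype(float).ravel()
          b = patch_b.astype(float).ravel()
          a -= a.mean()
          b -= b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0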

  9. 3D mapping of stellar populations in galaxies as a function of environment

    NASA Astrophysics Data System (ADS)

    Thomas, Daniel

    2015-08-01

    MaNGA (Mapping Nearby Galaxies at Apache Point Observatory) is a 6-year SDSS-IV survey that will obtain resolved spectroscopy from 3600 Å to 10300 Å for a representative sample of 10,000 nearby galaxies. MaNGA will allow the internal kinematics and spatially-resolved properties of stellar populations and gas inside galaxies to be studied as a function of local environment and halo mass for the very first time. I will present results from our analysis of the first year MaNGA data. The main focus is on the 3-dimensional distribution of stellar population properties in galaxies - formation age, element abundance, IMF slope - studying how these vary spatially in galaxies as a function of galaxy environment and dark matter halo mass.

  10. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine the movement of the platform and to create a model of the environment it senses.
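
    As a minimal illustration of the inertial side of such a system, the sketch below shows a standard complementary filter that fuses gyroscope rates with accelerometer-derived tilt angles. It is a generic textbook estimator offered under that assumption; it stands in for, rather than reproduces, the paper's combined stereo-visual and inertial tracker.

      import numpy as np

      def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
          """Fuse gyroscope angular rates (rad/s) with noisy absolute tilt
          angles derived from the accelerometer (rad): the gyro dominates at
          short time scales, the accelerometer corrects long-term drift."""
          angle = accel_angles[0]
          fused = []
          for rate, accel_angle in zip(gyro_rates, accel_angles):
              angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
              fused.append(angle)
          return np.array(fused)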

  11. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  12. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a virtual reality room with five rear-projected walls and a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle design, through the subsequent engineering steps, ergonomic evaluation and perceived quality control, Renault has built up a list of use cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by the current uses of nomad devices (multi-touch gestures, the iPhone UI look'n'feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a four-sided homemade low-cost virtual reality room powered by ultra-short-range and standard HD home projectors.

  13. Information visualization in a distributed virtual decision support environment

    NASA Astrophysics Data System (ADS)

    Blocher, Timothy W.

    2002-07-01

    The visualization of and interaction with decision quality information is critical for effective decision makers in today's data rich environments. The generation and presentation of intuitively meaningful decision support information is the challenge. In order to investigate various visualization approaches to improve the timeliness and quality of Commander decisions, a robust, distributed virtual simulation environment, based on AFRL's Global Awareness Virtual Testbed (GAVTB), is being developed to represent an Air Operations Center (AOC) environment. The powerful Jview visualization technology is employed to efficiently and effectively utilize the simulation products to experiment with various decision quality representations and interactions required by military commanders.

  14. Accident response -- X-ray to virtual environment

    SciTech Connect

    Hefele, J.; Stupin, D.; Kelley, T.; Sheats, M.; Tsai, C.

    1999-03-01

    The Engineering Sciences and Applications (ESA) Division of Los Alamos National Laboratory (LANL) has been working to develop a process to extract topographical information from digital x-ray data for modeling in a Computer Aided Design (CAD) environment and translation into a virtual environment. The application for this process is the evolution of a field deployable tool for use by the Accident Response Group (ARG) at the Laboratory. The authors have used both CT Scan and radiography data in their process development. The data is translated into a format recognizable by Pro/ENGINEER™ and then into a virtual environment that can be operated on by dVISE™. They have successfully taken both CT Scan and radiograph data of single components and created solid and virtual environment models for interrogation.

  15. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: modification of the game controller to be MRI compatible; design of the VBS2 virtual environment for cooperative interactions; and syncing of two MRI machines for simultaneous recording. PMID:26150964

  16. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment

    PubMed Central

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects’ brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as “theory of mind.” However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners’ operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: modification of the game controller to be MRI compatible; design of the VBS2 virtual environment for cooperative interactions; and syncing of two MRI machines for simultaneous recording. PMID:26150964

  17. Virtual Planetary Analysis Environment for Remote Science

    NASA Technical Reports Server (NTRS)

    Keely, Leslie; Beyer, Ross; Edwards, Laurence; Lees, David

    2009-01-01

    All of the data for NASA's current planetary missions and most data for field experiments are collected via orbiting spacecraft, aircraft, and robotic explorers. Mission scientists are unable to employ traditional field methods when operating remotely. We have developed a virtual exploration tool for remote sites with data analysis capabilities that extend human perception quantitatively and qualitatively. Scientists and mission engineers can use it to explore a realistic representation of a remote site. It also provides software tools to "touch" and "measure" remote sites with an immediacy that boosts scientific productivity and is essential for mission operations.

  18. Virtual building environments (VBE) - Applying information modeling to buildings

    SciTech Connect

    Bazjanac, Vladimir

    2004-06-21

    A Virtual Building Environment (VBE) is a "place" where building industry project staffs can get help in creating Building Information Models (BIM) and in the use of virtual buildings. It consists of a group of industry software applications operated by industry experts who are also experts in the use of that software. The purpose of a VBE is to facilitate expert use of appropriate software applications in conjunction with each other to efficiently support multidisciplinary work. This paper defines BIM and virtual buildings, and describes VBE objectives, set-up, and characteristics of operation. It also reports on the VBE Initiative and the benefits seen in a couple of early VBE projects.

  19. Utilizing Virtual and Personal Learning Environments for Optimal Learning

    ERIC Educational Resources Information Center

    Terry, Krista, Ed.; Cheney, Amy, Ed.

    2016-01-01

    The integration of emerging technologies in higher education presents a new set of challenges and opportunities for educators. With a growing need for customized lesson plans in online education, educators are rethinking the design and development of their learning environments. "Utilizing Virtual and Personal Learning Environments for…

  20. Virtual Collaborative Simulation Environment for Integrated Product and Process Development

    NASA Technical Reports Server (NTRS)

    Gulli, Michael A.

    1997-01-01

    Deneb Robotics is a leader in the development of commercially available, leading-edge three-dimensional simulation software tools for virtual prototyping, simulation-based design, manufacturing process simulation, and factory floor simulation and training applications. Deneb has developed and commercially released a preliminary Virtual Collaborative Engineering (VCE) capability for Integrated Product and Process Development (IPPD). This capability allows distributed, real-time visualization and evaluation of design concepts, manufacturing processes, and total factories and enterprises in one seamless simulation environment.