Sample records for virtual 3d space

  1. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often accessed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection included semi-structured interviews with Second Life students, educators and designers. The findings reveal that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can inform the design of spaces in 3D multi-user virtual environments.

  2. Virtual performer: single camera 3D measuring system for interaction in virtual space

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-10-01

    The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument, such as a theremin, in the virtual space, or a performer puts on a show using a virtual character such as a puppet. The interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image on a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes a method for measuring the positions of the performer, his/her head and both eyes using a single camera.

  3. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: each partner's individual task, and communication with the other partner. Both are essential objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies of both 3D shape evaluation and communication support in virtual space. The proposed system provides a viewpoint for each task: the view from behind the user's own avatar, for smooth communication, and the avatar's-eye view, for 3D shape evaluation. Switching between these viewpoints satisfies the task conditions for both 3D shape evaluation and communication. The system consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We therefore compensate for the loss of the partner's avatar's nodding by introducing InterActor, a speech-driven embodied interactive actor. A sensory evaluation, by paired comparison of 3D shapes in collaborative situations in virtual and real space, and a questionnaire were performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  4. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space, and yet we struggle to display geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space, and this is being realized in the expanding area of research on 3D Geographic Infor...

  5. Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"

    ERIC Educational Resources Information Center

    Minocha, Shailey; Reeves, Ahmad John

    2010-01-01

    "Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or…

  6. Attitude and Self-Efficacy Change: English Language Learning in Virtual Worlds

    ERIC Educational Resources Information Center

    Zheng, Dongping; Young, Michael F.; Brewer, Robert A.; Wagner, Manuela

    2009-01-01

    This study explored affective factors in learning English as a foreign language in a 3D game-like virtual world, Quest Atlantis (QA). Through the use of communication tools (e.g., chat, bulletin board, telegrams, and email), 3D avatars, and 2D webpage navigation tools in virtual space, nonnative English speakers (NNES) co-solved online…

  7. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  8. Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces

    ERIC Educational Resources Information Center

    Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed

    2012-01-01

    The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…

  9. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    NASA Astrophysics Data System (ADS)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  10. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, viewing the scene from many different locations and orientations in real time, so that they can easily design the room interior without placing real furniture and articles. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame of the object 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metrically measured feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
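
The registration above amounts to mapping image points between views through a plane-to-plane projective transform. As a hedged sketch (hypothetical function names; a generic Direct Linear Transform homography estimate, not the authors' exact registration procedure):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography mapping
    src -> dst from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (homogeneous coordinates)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With correspondences from the two base images, `project` would place a virtual object's anchor point consistently in each new frame.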

  11. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical 3D grasping trajectories, with longer movement times and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. This atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the visual and haptic feedback provided. Possible implications of the findings for movement control within robotic and virtual environments are discussed.
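
Adherence to Weber's law here means that variability (the just-noticeable difference, JND) grows in proportion to object size, so the ratio JND/size stays roughly constant across sizes. A minimal sketch of that check (hypothetical helper names and tolerance; not the paper's analysis code):

```python
def weber_fractions(jnds, sizes):
    """Weber fraction per condition: JND divided by object size."""
    return [j / s for j, s in zip(jnds, sizes)]

def is_weber_consistent(jnds, sizes, tol=0.2):
    """True if JNDs scale roughly linearly with size, i.e. the Weber
    fraction deviates from its mean by less than `tol` (relative)."""
    fractions = weber_fractions(jnds, sizes)
    mean = sum(fractions) / len(fractions)
    return all(abs(f - mean) <= tol * mean for f in fractions)
```

Analytical grasping, by contrast, would show near-constant JNDs across sizes, failing this proportionality check.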

  12. Eco-Dialogical Learning and Translanguaging in Open-Ended 3D Virtual Learning Environments: Where Place, Time, and Objects Matter

    ERIC Educational Resources Information Center

    Zheng, Dongping; Schmidt, Matthew; Hu, Ying; Liu, Min; Hsu, Jesse

    2017-01-01

    The purpose of this research was to explore the relationships between design, learning, and translanguaging in a 3D collaborative virtual learning environment for adolescent learners of Chinese and English. We designed an open-ended space congruent with ecological and dialogical perspectives on second language acquisition. In such a space,…

  13. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    ERIC Educational Resources Information Center

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most participants…

  14. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems has become very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes many computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse the block diagram (useful for checking relationships among large numbers of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives a capability for direct and intuitive planning and understanding of the complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.

  15. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  16. Virtual reality 3D echocardiography in the assessment of tricuspid valve function after surgical closure of ventricular septal defect.

    PubMed

    Bol Raap, Goris; Koning, Anton H J; Scohy, Thierry V; ten Harkel, A Derk-Jan; Meijboom, Folkert J; Kappetein, A Pieter; van der Spek, Peter J; Bogers, Ad J J C

    2007-02-16

    This study was done to investigate the potential additional role of virtual reality, using three-dimensional (3D) echocardiographic holograms, in the postoperative assessment of tricuspid valve function after surgical closure of ventricular septal defect (VSD). Twelve data sets from intraoperative epicardial echocardiographic studies in 5 operations (patient age at operation 3 weeks to 4 years and body weight at operation 3.8 to 17.2 kg) after surgical closure of VSD were included in the study. The data sets were analysed both as two-dimensional (2D) images on the screen of the ultrasound system and as holograms in an I-Space virtual reality (VR) system. The 2D images were assessed for tricuspid valve function. In the I-Space, a 6-degrees-of-freedom controller was used to create the necessary projection positions and cutting planes in the hologram. The holograms were used for additional assessment of tricuspid valve leaflet mobility. All data sets could be used for 2D as well as holographic analysis, and in all data sets the area of interest could be identified. The 2D analysis showed no tricuspid valve stenosis or regurgitation, and leaflet mobility was considered normal. In the virtual reality of the I-Space, all data sets allowed assessment of the tricuspid leaflet level in a single holographic representation. In 3 holograms the septal leaflet showed restricted mobility that was not appreciated in the 2D echocardiogram. In 4 data sets the posterior leaflet and the tricuspid papillary apparatus were not completely included. This report shows that dynamic holographic imaging of intraoperative postoperative echocardiographic data on tricuspid valve function after VSD closure is feasible, and that holographic analysis allows additional assessment of tricuspid valve leaflet mobility. The large size of the probe relative to the small size of the patient may preclude a complete data set. At present, the requirement of an I-Space VR system limits the applicability of virtual reality 3D echocardiography in clinical practice.

  17. 2D and 3D Traveling Salesman Problem

    ERIC Educational Resources Information Center

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
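
For context, a common baseline for the near-optimal tours that human subjects produce is the greedy nearest-neighbor heuristic, which works unchanged for 2D or 3D point sets. A sketch (illustrative only; the study measured human performance, not this algorithm):

```python
import math

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor TSP heuristic: start at point 0 and
    repeatedly visit the closest unvisited point. Works for 2D or 3D
    tuples, since math.dist accepts coordinates of any dimension."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total closed-tour length, returning to the start."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

The heuristic runs in quadratic time; the linear-time human performance reported above is what makes the comparison interesting.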

  18. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications in which several users, represented by avatars, simultaneously access a 3D space via the Internet. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well-documented online action games industry, now often played…

  19. 4D Flexible Atom-Pairs: An efficient probabilistic conformational space comparison for ligand-based virtual screening

    PubMed Central

    2011-01-01

    Background: The performance of 3D-based virtual screening similarity functions is affected by the conformations of the compounds being compared; as a result, 3D approaches are often less robust than 2D approaches. Applying 3D methods to multiple-conformer data sets normally reduces this weakness, but entails significant computational overhead. We therefore developed a conformational space encoding based on Gaussian mixture models, together with a similarity function that operates on these models. This model-based encoding allows an efficient comparison of the conformational space of compounds. Results: Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach; even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an average AUC of 0.78 on the filtered Directory of Useful Decoys data sets, whereas the best 2D- and 3D-based approaches in this study yield AUC values of 0.74 and 0.72, respectively. Overall, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions: Our 4D method yields robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of conformational spaces, circumventing the weakness of 3D-based approaches that operate on single conformations. At over 100,000 similarity calculations on a single desktop CPU, use of the 4D flexible atom-pair approach in real-world applications is feasible. PMID:21733172
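
The AUC values quoted above are the standard ranking metric in virtual screening: the probability that a randomly chosen active compound receives a higher similarity score than a randomly chosen decoy. A minimal sketch of that computation (illustrative; not the paper's evaluation code):

```python
def auc(scores_active, scores_decoy):
    """Rank-based ROC AUC (Mann-Whitney form): the fraction of
    active/decoy pairs where the active outscores the decoy,
    counting ties as half a win."""
    wins = 0.0
    for a in scores_active:
        for d in scores_decoy:
            if a > d:
                wins += 1.0
            elif a == d:
                wins += 0.5
    return wins / (len(scores_active) * len(scores_decoy))
```

An AUC of 0.78 therefore means a 78% chance that the similarity function ranks a true active above a decoy; 0.5 is random ranking.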

  20. A Study of Multi-Representation of Geometry Problem Solving with Virtual Manipulatives and Whiteboard System

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie

    2009-01-01

    In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…

  1. Formalizing and Promoting Collaboration in 3D Virtual Environments - A Blueprint for the Creation of Group Interaction Patterns

    NASA Astrophysics Data System (ADS)

    Schmeil, Andreas; Eppler, Martin J.

    Although virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and practical application, it remains unclear how best to benefit from meeting colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples resulting from the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.

  2. Lead-oriented synthesis: Investigation of organolithium-mediated routes to 3-D scaffolds and 3-D shape analysis of a virtual lead-like library.

    PubMed

    Lüthy, Monique; Wheldon, Mary C; Haji-Cheteh, Chehasnah; Atobe, Masakazu; Bond, Paul S; O'Brien, Peter; Hubbard, Roderick E; Fairlamb, Ian J S

    2015-06-01

    Synthetic routes to six 3-D scaffolds containing piperazine, pyrrolidine and piperidine cores have been developed. The synthetic methodology centred on N-Boc α-lithiation-trapping chemistry. Notably, suitably protected and/or functionalised medicinal chemistry building blocks were synthesised via concise, connective methodology; this represents a rare example of lead-oriented synthesis. A virtual library of 190 compounds was then enumerated from the six scaffolds. Of these, 92 compounds (48%) fit the lead-like criteria of: (i) -1⩽AlogP⩽3; (ii) 14⩽number of heavy atoms⩽26; (iii) total polar surface area⩾50Å(2). The 3-D shapes of the 190 compounds were analysed using a triangular plot of normalised principal moments of inertia (PMI). From this, 46 compounds were identified that had lead-like properties and possessed 3-D shapes in under-represented areas of pharmaceutical space. Thus, the PMI analysis of the 190-member virtual library showed that, whilst the scaffolds may appear on paper to be 3-D in shape, only 24% of the compounds actually had 3-D structures in the more interesting areas of 3-D drug space. Copyright © 2015 Elsevier Ltd. All rights reserved.
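
Both analyses above are simple to reproduce once the descriptors (AlogP, heavy-atom count, TPSA, principal moments of inertia) are in hand. A hedged sketch (hypothetical helper names; the descriptor calculation itself is not shown):

```python
def is_lead_like(alogp, heavy_atoms, tpsa):
    """Lead-like filter exactly as stated in the abstract:
    -1 <= AlogP <= 3, 14 <= heavy atoms <= 26, TPSA >= 50 A^2."""
    return -1 <= alogp <= 3 and 14 <= heavy_atoms <= 26 and tpsa >= 50

def normalized_pmi_ratios(i1, i2, i3):
    """Coordinates for the PMI triangular plot: (I1/I3, I2/I3),
    assuming principal moments sorted so that i1 <= i2 <= i3.
    Rod-like shapes fall near (0, 1), discs near (0.5, 0.5),
    and spheres near (1, 1)."""
    return i1 / i3, i2 / i3
```

Compounds whose NPR coordinates lie away from the rod-disc edge of the triangle are the "more 3-D" structures the analysis singles out.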

  3. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique, and it can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space, consisting of meshes and texture maps, is calculated from them with software developed at NRC. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, comprises all the merchandise and the manipulators used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer, in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is written entirely in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.

  4. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  5. CliniSpace: a multiperson 3D online immersive training environment accessible through a browser.

    PubMed

    Dev, Parvati; Heinrichs, W LeRoy; Youngblood, Patricia

    2011-01-01

    Immersive online medical environments, with dynamic virtual patients, have been shown to be effective for scenario-based learning (1). However, ease of use and ease of access have been barriers to their use. We used feedback from prior evaluation of these projects to design and develop CliniSpace. To improve usability, we retained the richness of prior virtual environments but modified the user interface. To improve access, we used a Software-as-a-Service (SaaS) approach to present a richly immersive 3D environment within a web browser.

  6. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photographic process reduces a three-dimensional (3D) wound to two dimensions. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface digitizing scanners can be used as a powerful tool for analysing wounds and injury-causing instruments in trauma cases. 3D documentation of a skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated using two illustrative cases. With this optical 3D digitizing method, the wounds (virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D, in real-life size and shape, and can be rotated in a CAD program on the computer screen. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in virtual space within a 3D CAD program, to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and all forensically relevant injuries using optical 3D scanners.

  7. Enhancement of Spatial Thinking with Virtual Spaces 1.0

    ERIC Educational Resources Information Center

    Hauptman, Hanoch

    2010-01-01

    Developing a software environment to enhance 3D geometric proficiency demands the consideration of theoretical views of the learning process. Simultaneously, this effort requires taking into account the range of tools that technology offers, as well as their limitations. In this paper, we report on the design of Virtual Spaces 1.0 software, a…

  8. EEG Control of a Virtual Helicopter in 3-Dimensional Space Using Intelligent Control Strategies

    PubMed Central

    Royer, Audrey S.; Doud, Alexander J.; Rose, Minn L.

    2011-01-01

    Films like Firefox, Surrogates, and Avatar have explored the possibilities of using brain-computer interfaces (BCIs) to control machines and replacement bodies with only thought. Real world BCIs have made great progress toward that end. Invasive BCIs have enabled monkeys to fully explore 3-dimensional (3D) space using neuroprosthetics. However, non-invasive BCIs have not been able to demonstrate such mastery of 3D space. Here, we report our work, which demonstrates that human subjects can use a non-invasive BCI to fly a virtual helicopter to any point in a 3D world. Through use of intelligent control strategies, we have facilitated the realization of controlled flight in 3D space. We accomplished this through a reductionist approach that assigns subject-specific control signals to the crucial components of 3D flight. Subject control of the helicopter was comparable when using either the BCI or a keyboard. By using intelligent control strategies, the strengths of both the user and the BCI system were leveraged and accentuated. Intelligent control strategies in BCI systems such as those presented here may prove to be the foundation for complex BCIs capable of doing more than we ever imagined. PMID:20876032

  9. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
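    The recognizer described above is built on dynamic time warping (DTW) matched against gesture templates. A minimal illustrative sketch of DTW-based template matching (the function names and the nearest-template classifier are our own, not taken from the paper, which also adds real-time segmentation and variation estimation):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture trajectories
    a (shape (n, d)) and b (shape (m, d)), using Euclidean frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, or match on the warping path
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(gesture, templates):
    """Return the label of the template nearest to `gesture` under DTW."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))
```

    Because DTW warps the time axis, a slow and a fast execution of the same hand motion map to a small distance, which is why it suits free-form gesture input.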

  10. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and results from a pilot test.

  11. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies.

    PubMed

    Royer, Audrey S; Doud, Alexander J; Rose, Minn L; He, Bin

    2010-12-01

    Films like Firefox, Surrogates, and Avatar have explored the possibilities of using brain-computer interfaces (BCIs) to control machines and replacement bodies with only thought. Real world BCIs have made great progress toward that end. Invasive BCIs have enabled monkeys to fully explore 3-D space using neuroprosthetics. However, noninvasive BCIs have not been able to demonstrate such mastery of 3-D space. Here, we report our work, which demonstrates that human subjects can use a noninvasive BCI to fly a virtual helicopter to any point in a 3-D world. Through use of intelligent control strategies, we have facilitated the realization of controlled flight in 3-D space. We accomplished this through a reductionist approach that assigns subject-specific control signals to the crucial components of 3-D flight. Subject control of the helicopter was comparable when using either the BCI or a keyboard. By using intelligent control strategies, the strengths of both the user and the BCI system were leveraged and accentuated. Intelligent control strategies in BCI systems such as those presented here may prove to be the foundation for complex BCIs capable of doing more than we ever imagined.

  12. Designing a Virtual Social Space for Language Acquisition

    ERIC Educational Resources Information Center

    Woolson, Maria Alessandra

    2012-01-01

    Middleverse de Español (MdE) is an evolving platform for foreign language (FL) study, aligned to the goals of ACTFL's National Standards and 2007 MLA report. The project simulates an immersive environment in a virtual 3-D space for the acquisition of translingual and transcultural competence in Spanish meant to support content-based and…

  13. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

    Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  14. The Use of Virtual Ethnography in Distance Education Research

    ERIC Educational Resources Information Center

    Uzun, Kadriye; Aydin, Cengiz Hakan

    2012-01-01

    3D virtual worlds can and have been used as a meeting place for distance education courses. Virtual worlds allow for group learning of the kind enjoyed by students gathered in a virtual classroom, where they know they are in a communal space, they are aware of the social process of learning and are affected by the presence and behaviour of their…

  15. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometric modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  16. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

    ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade virtual reality (VR) systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale the image to focus on key features, and 3) interact with other users in a shared virtual space enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
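    The first step ConfocalVR depends on, re-assembling the confocal z-stack of 2D images into a renderable 3D volume, can be sketched generically as follows (an illustration of the idea, not ConfocalVR's actual code):

```python
import numpy as np

def stack_to_volume(slices):
    """Assemble confocal z-slices (2D intensity arrays, one per focal
    plane) into a single 3D volume and normalize intensities to [0, 1]
    so the volume is ready for VR volume rendering."""
    vol = np.stack(slices, axis=0).astype(float)   # shape: (z, y, x)
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo) if hi > lo else vol
```

    In the VR application this volume is then rendered immersively instead of being projected to a 2D screen, which is the paper's central point.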

  17. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
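    A common geometric baseline for recovering the 3D point of regard is to triangulate the two eye gaze rays and take the midpoint of their shortest connecting segment; the sketch below illustrates that baseline (the paper's optimized variant is not reproduced here, and the interface is our own):

```python
import numpy as np

def gaze_point_3d(eye_l, dir_l, eye_r, dir_r):
    """Estimate the 3D point of regard as the midpoint of the shortest
    segment between the left and right gaze rays. eye_*: eye positions,
    dir_*: gaze directions (need not be unit vectors)."""
    u = dir_l / np.linalg.norm(dir_l)
    v = dir_r / np.linalg.norm(dir_r)
    w0 = eye_l - eye_r
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:           # rays nearly parallel
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom  # parameter along the left ray
        t = (a * e - b * d) / denom  # parameter along the right ray
    return 0.5 * ((eye_l + s * u) + (eye_r + t * v))
```

    With perfectly convergent rays the two closest points coincide at the fixation point; with noisy eye data the midpoint gives a robust compromise, and the gap between the rays indicates measurement quality.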

  18. A 3D visualization and simulation of the individual human jaw.

    PubMed

    Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo

    2003-01-01

    A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain its biomechanics, in which the muscular forces, acting through the occlusal and condylar surfaces, are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw at the level needed for chewing, balancing the mandible and preventing dislocation and loading of nonarticular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), in a single object package that is low-cost and easy to operate.
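    Resolving forces into components along a chosen coordinate system, as the model does, amounts to scaling direction vectors, and static 3D equilibrium means the resultant vanishes. A minimal sketch (hypothetical interface, not the authors' code):

```python
import numpy as np

def resolve(magnitude, direction):
    """Resolve a force of given magnitude acting along `direction`
    (not necessarily a unit vector) into x, y, z components."""
    d = np.asarray(direction, dtype=float)
    return magnitude * d / np.linalg.norm(d)

def net_force(forces):
    """Sum resolved (magnitude, direction) force pairs; in static 3D
    equilibrium of the mandible the resultant is approximately zero."""
    return np.sum([resolve(m, d) for m, d in forces], axis=0)
```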

  19. Interactive Immersive Virtualmuseum: Digital Documentation for Virtual Interaction

    NASA Astrophysics Data System (ADS)

    Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.

    2018-05-01

    Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, owing to poor user-system interaction, caused by the incomplete maturity of technologies specific to museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes interaction with archaeological finds or any other cultural heritage possible inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move through the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated inside the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum, making possible interaction with perishable, damaged or lost objects and public access to inaccessible or no longer existing places, thereby promoting the preservation of fragile sites.

  20. Photographer: Digital Telepresence: Dr. Muriel Ross's Virtual Reality Application for Neuroscience

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Photographer: Digital Telepresence: Dr. Muriel Ross's Virtual Reality Application for Neuroscience Research, Biocomputation. The work studies human disorders of balance and space motion sickness. Shown here is a 3D reconstruction of a nerve ending in the inner ear, nature's wiring of the balance organs.

  1. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
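    The pin-height update described above can be pictured as sampling the virtual object's surface at each pin's (x, y) location when the virtual hand touches it. The interface below is hypothetical (the paper does not describe its control system at this level of detail):

```python
import numpy as np

def pin_heights(surface_fn, xs, ys, rest=0.0):
    """Sample a virtual surface z = surface_fn(x, y) at each pin's
    (x, y) location to set the pin heights; surface_fn returns None
    where there is no object, and those pins drop to the `rest` height."""
    heights = np.full((len(ys), len(xs)), rest)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = surface_fn(x, y)
            if z is not None:
                heights[i, j] = z
    return heights
```

    In the real device the robot arm supplies the gross 6-DOF position of the hand while the pins render the local shape and texture, so a loop like this would run against the surface patch under the current hand pose.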

  2. Learning Behaviors and Interaction Patterns among Students in Virtual Learning Worlds

    ERIC Educational Resources Information Center

    Lin, Chi-Syan; Ma, Jung Tsan; Chen, Yi-Lung; Kuo, Ming-Shiou

    2010-01-01

    The goal of this study is to investigate how students behave themselves in the virtual learning worlds. The study creates a 3D virtual learning world, entitled the Best Digital Village, and implements a learning program on it. The learning program, the Expo, takes place at the Exhibition Center in the Best Digital Village. The space in the Expo is…

  3. Evaluation of the performance of 3D virtual screening protocols: RMSD comparisons, enrichment assessments, and decoy selection--what can we learn from earlier mistakes?

    PubMed

    Kirchmair, Johannes; Markt, Patrick; Distinto, Simona; Wolber, Gerhard; Langer, Thierry

    2008-01-01

    Within the last few years a considerable number of evaluative studies have been published that investigate the performance of 3D virtual screening approaches. In particular, assessments of protein-ligand docking are attracting remarkable interest in the scientific community. However, comparing virtual screening approaches is a non-trivial task. Several publications, especially in the field of molecular docking, suffer from shortcomings that are likely to affect the significance of the results considerably. These quality issues often arise from poor study design, from biasing, from the use of improper or inexpressive enrichment descriptors, and from errors in interpretation of the data output. In this review we analyze recent literature evaluating 3D virtual screening methods, with a focus on molecular docking. We highlight problematic issues and provide guidelines on how to improve the quality of computational studies. Since 3D virtual screening protocols are in general assessed by their ability to discriminate between active and inactive compounds, we summarize the impact of the composition and preparation of test sets on the outcome of evaluations. Moreover, we investigate the significance of both classic enrichment parameters and advanced descriptors for the performance of 3D virtual screening methods. Furthermore, we review the significance and suitability of RMSD as a measure for the accuracy of protein-ligand docking algorithms and of conformational space subsampling algorithms.
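    Of the classic enrichment parameters such reviews examine, the enrichment factor (EF) is the most common; in its standard form (an illustrative sketch, not code from the paper) it compares the active rate in the top-ranked fraction of the screened database with the overall active rate:

```python
def enrichment_factor(ranked_labels, fraction=0.01):
    """Classic enrichment factor for a virtual screening run.
    `ranked_labels` lists 1 (active) or 0 (decoy) in best-score-first
    order; `fraction` is the screened fraction of the database."""
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    actives_top = sum(ranked_labels[:n_top])
    actives_total = sum(ranked_labels)
    return (actives_top / n_top) / (actives_total / n)
```

    An EF of 1 means no better than random selection; values well above 1 indicate early enrichment. The review's warning applies directly here: a biased or trivially distinguishable decoy set inflates this number without saying anything about real screening performance.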

  4. A Demonstration of ‘Broken’ Visual Space

    PubMed Central

    Gilson, Stuart

    2012-01-01

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A>B>D yet also A

  5. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become a desirable and in-demand application, yet a challenging one owing to the nature of a web application's running environment, with limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth and high computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and their virtual scenes can be generated directly from photos, skipping the modeling process. But these image-based approaches may require special cameras or effort to take panoramic views, and they provide only a fixed-point look-around with zooming in and out rather than a 'walk-around', which is a very important feature for giving virtual tourists an immersive experience. Our web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots from conventional photos.

  6. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only express real-world objects naturally, realistically and vividly, but can also extend the campus across the dimensions of time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels, etc. Dynamic interactive functions are then realized by programming the object models built in 3ds Max with VRML. The research focuses on virtual campus scene modeling and VRML scene design, and on optimization strategies for the various real-time processing technologies in the scene design process, guaranteeing texture-map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  7. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. (DTI) developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  8. The effects of viewpoint on the virtual space of pictures

    NASA Technical Reports Server (NTRS)

    Sedgwick, H. A.

    1989-01-01

    Pictorial displays whose primary purpose is to convey accurate information about the 3-D spatial layout of an environment are discussed, along with how, and how well, pictures can convey such information. It is suggested that picture perception is not best approached as a unitary, indivisible process. Rather, it is a complex process depending on multiple, partially redundant, interacting sources of visual information for both the real surface of the picture and the virtual space beyond. Each picture must be assessed for the particular information that it makes available. This determines how accurately the virtual space represented by the picture is seen, as well as how it is distorted when seen from the wrong viewpoint.

  9. Reaching to virtual targets: The oblique effect reloaded in 3-D.

    PubMed

    Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2017-02-20

    Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Learning in 3-D Multiuser Virtual Environments: Exploring the Use of Unique 3-D Attributes for Online Problem-Based Learning

    ERIC Educational Resources Information Center

    Omale, Nicholas; Hung, Wei-Chen; Luetkehans, Lara; Cooke-Plagwitz, Jessamine

    2009-01-01

    The purpose of this article is to present the results of a study conducted to investigate how the attributes of 3-D technology such as avatars, 3-D space, and comic style bubble dialogue boxes affect participants' social, cognitive, and teaching presences in a blended problem-based learning environment. The community of inquiry model was adopted…

  11. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, U. of Colorado at Boulder. History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from today's 3-D engineering simulations to tomorrow's 3-D IVE mission planning, simulation and optimization techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center, using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations, and Mars Polar Lander visual work instructions. Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing immersive visualization provide the key to streamlining the mission planning and optimizing the engineering design phases of future aerospace missions.

  12. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
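    The core idea of generating 3D geometry from a single 2D mouse action can be pictured as casting a ray through the volume behind each clicked pixel and picking the most plausible voxel along it. The sketch below uses maximum intensity along the viewing ray, a simplification of the published VF method:

```python
import numpy as np

def click_to_3d(volume, x, y):
    """Map a 2D click on the XY projection of a volume (indexed
    volume[z, y, x]) to a 3D point by taking the voxel of maximum
    intensity along the viewing ray through (x, y)."""
    ray = volume[:, y, x]        # intensities along z at the clicked pixel
    z = int(np.argmax(ray))
    return (x, y, z)
```

    Applying this pixel-wise along a mouse stroke yields a 3D curve from a single 2D gesture, which is the kind of one-operation 2D-to-3D mapping the abstract describes.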

  13. Second Life in Higher Education: Assessing the Potential for and the Barriers to Deploying Virtual Worlds in Learning and Teaching

    ERIC Educational Resources Information Center

    Warburton, Steven

    2009-01-01

    "Second Life" (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by…

  14. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system that works as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and observers can touch and interact with these floating images, for example so that children can play with virtual modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the method of measuring the position of the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest such method, using a single camera rather than a stereo camera, and present the results of our viewer system.

  15. Embryonic delay in growth and development related to confined placental trisomy 16 mosaicism, diagnosed by I-Space Virtual Reality.

    PubMed

    Verwoerd-Dikkeboom, Christine M; van Heesch, Peter N A C M; Koning, Anton H J; Galjaard, Robert-Jan H; Exalto, Niek; Steegers, Eric A P

    2008-11-01

    To demonstrate the use of a novel three-dimensional (3D) virtual reality (VR) system in the visualization of first trimester growth and development in a case of confined placental trisomy 16 mosaicism (CPM+16). Case report. Prospective study on first trimester growth using a 3D VR system. A 34-year-old gravida 1, para 0 was seen weekly in the first trimester for 3D ultrasound examinations. Chorionic villus sampling was performed because of an enlarged nuchal translucency (NT) measurement and low pregnancy-associated plasma protein-A levels, followed by amniocentesis. Amniocentesis revealed a CPM+16. On two-dimensional (2D) and 3D ultrasound no structural anomalies were found with normal fetal Dopplers. Growth remained below the 2.3 percentile. At 37 weeks, a female child of 2010 g (<2.5 percentile) was born. After birth, growth climbed to the 50th percentile in the first 2 months. The I-Space VR system provided information about phenotypes not obtainable by standard 2D ultrasound. In this case, the delay in growth and development could be observed very early in pregnancy. Since first trimester screening programs are still improving and becoming even more important, systems such as the I-Space open a new era for in vivo studies on the physiologic and pathologic processes involved in embryogenesis.

  16. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and synthesize images of the model virtually viewed from different angles, with natural shadows suited to the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
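    The base-model modification step could be sketched, in a much simplified form, as a uniform scale-and-translate fit of a generic model to measured feature points (the actual method also handles rotation and local deformation; all names here are illustrative):

```python
def fit_base_model(base_points, target_points):
    """Roughly align a generic base face model to measured 3D feature
    points with a uniform scale and a translation (rotation omitted for
    brevity). A stand-in for the base-model modification step."""
    n = len(base_points)
    mean = lambda pts: [sum(p[i] for p in pts) / n for i in range(3)]
    mb, mt = mean(base_points), mean(target_points)
    spread = lambda pts, m: sum(
        sum((p[i] - m[i]) ** 2 for i in range(3)) for p in pts)
    # Scale so the model's spread matches the measured spread.
    s = (spread(target_points, mt) / spread(base_points, mb)) ** 0.5
    # Scale about the base centroid, then move to the target centroid.
    return [[s * (p[i] - mb[i]) + mt[i] for i in range(3)]
            for p in base_points]
```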

  17. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    PubMed

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
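    A low-cost spatialization of the kind evaluated here might reduce, at its simplest, to constant-power amplitude panning on a stereo headset; this sketch assumes only an azimuth input and is not the authors' actual simulation:

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power panning: map a sound source's azimuth (negative =
    left, positive = right, degrees) to (left, right) channel gains.
    A cheap stand-in for full HRTF-based 3D audio; the frontal-arc
    mapping is an assumption."""
    # Clamp to the frontal arc and convert to a 0..pi/2 pan angle.
    a = max(-90.0, min(90.0, azimuth_deg))
    theta = (a + 90.0) / 180.0 * math.pi / 2.0
    # cos^2 + sin^2 = 1 keeps perceived loudness constant across the pan.
    return math.cos(theta), math.sin(theta)
```

    In a head-tracked setup, the azimuth would be recomputed each frame from the HMD pose, which is what lets the sound cue guide the trainee's gaze.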

  18. Visual selective attention with virtual barriers.

    PubMed

    Schneider, Darryl W

    2017-07-01

    Previous studies have shown that interference effects in the flanker task are reduced when physical barriers (e.g., hands) are placed around rather than below a target flanked by distractors. One explanation of this finding is the referential coding hypothesis, whereby the barriers serve as reference objects for allocating attention. In five experiments, the generality of the referential coding hypothesis was tested by investigating whether interference effects are modulated by the placement of virtual barriers (e.g., parentheses). Modulation of flanker interference was found only when target and distractors differed in size and the virtual barriers were beveled wood-grain objects. Under these conditions and those of previous studies, the author conjectures that an impression of depth was produced when the barriers were around the target, such that the target was perceived to be on a different depth plane than the distractors. Perception of depth in the stimulus display might have led to referential coding of the stimuli in three-dimensional (3-D) space, influencing the allocation of attention beyond the horizontal and vertical dimensions. This 3-D referential coding hypothesis is consistent with research on selective attention in 3-D space that shows flanker interference is reduced when target and distractors are separated in depth.

  19. Heroes for a Wicked World: Enders Game as a Case for Fiction in PME

    DTIC Science & Technology

    2015-06-10

    Sengers, “Narrative Intelligence,” 3. 84. Blair, D. and Meyer, T. “Tools for an Interactive Virtual Cinema .” Creating Personalities for Synthetic Actors...Conflict . . . in Space. Historian Max Hastings writes, “It was the Japanese people’s ill-fortune that it became feasible to bomb them just when American...2012). Blair, D. and Meyer, T. “Tools for an Interactive Virtual Cinema .” Creating Personalities for Synthetic Actors: Towards Autonomous Personality

  20. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  1. Flexible Virtual Structure Consideration in Dynamic Modeling of Mobile Robots Formation

    NASA Astrophysics Data System (ADS)

    El Kamel, A. Essghaier; Beji, L.; Lerbet, J.; Abichou, A.

    2009-03-01

    In cooperative mobile robotics, we seek formation keeping and maintenance of a geometric configuration during movement. As a solution to these problems, the concept of a virtual structure is considered. Based on this idea, we have developed an efficient flexible virtual structure, describing the dynamic model of n vehicles in formation in which the whole formation is kept interdependent. Note that, for 2D and 3D space navigation, only a rigid virtual structure has been proposed in the literature, and the problem was limited to the kinematic behavior of the structure. Hence, the flexible virtual structure in dynamic modeling of mobile robot formations presented in this paper gives the formation more capability to avoid obstacles in hostile environments while keeping formation and avoiding inter-agent collisions.
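    For comparison, a rigid virtual structure reduces to rotating fixed body-frame offsets into the world frame; the `scale` parameter below is a toy stand-in for the flexibility the paper adds (all names are illustrative, and the real model is dynamic, not kinematic):

```python
import math

def formation_targets(center, heading, offsets, scale=1.0):
    """Place n robots relative to a virtual structure: rotate each
    body-frame offset by the structure heading and translate to its
    center. `scale` lets the structure contract, a crude analogue of
    flexibility for slipping past obstacles."""
    c, s = math.cos(heading), math.sin(heading)
    return [(center[0] + scale * (c * ox - s * oy),
             center[1] + scale * (s * ox + c * oy))
            for ox, oy in offsets]
```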

  2. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  3. Caring in the Dynamics of Design and Languaging: Exploring Second Language Learning in 3D Virtual Spaces

    ERIC Educational Resources Information Center

    Zheng, Dongping

    2012-01-01

    This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…

  4. Youth and the Ethics of Identity Play in Virtual Spaces

    ERIC Educational Resources Information Center

    Siyahhan, Sinem; Barab, Sasha; James, Carrie

    2011-01-01

    In this study, we explored a new experimental methodology for investigating children's (ages 10 to 14) stances with respect to the ethics of online identity play. We used a scenario about peer identity misrepresentation embedded in a 3D virtual game environment and randomly assigned 265 elementary students (162 female, 103 male) to three…

  5. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient specific data, and display that data to the end user using consumer level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls.

  6. Virtual Reality in Neurointervention.

    PubMed

    Ong, Chin Siang; Deib, Gerard; Yesantharao, Pooja; Qiao, Ye; Pakpoor, Jina; Hibino, Narutoshi; Hui, Ferdinand; Garcia, Juan R

    2018-06-01

    Virtual reality (VR) allows users to experience realistic, immersive 3D virtual environments with the depth perception and binocular field of view of real 3D settings. Newer VR technology has now allowed for interaction with 3D objects within these virtual environments through the use of VR controllers. This technical note describes our preliminary experience with VR as an adjunct tool to traditional angiographic imaging in the preprocedural workup of a patient with a complex pseudoaneurysm. Angiographic MRI data was imported and segmented to create 3D meshes of bilateral carotid vasculature. The 3D meshes were then projected into VR space, allowing the operator to inspect the carotid vasculature using a 3D VR headset as well as interact with the pseudoaneurysm (handling, rotation, magnification, and sectioning) using two VR controllers. 3D segmentation of a complex pseudoaneurysm in the distal cervical segment of the right internal carotid artery was successfully performed and projected into VR. Conventional and VR visualization modes were equally effective in identifying and classifying the pathology. VR visualization allowed the operators to manipulate the dataset to achieve a greater understanding of the anatomy of the parent vessel, the angioarchitecture of the pseudoaneurysm, and the surface contours of all visualized structures. This preliminary study demonstrates the feasibility of utilizing VR for preprocedural evaluation in patients with anatomically complex neurovascular disorders. This novel visualization approach may serve as a valuable adjunct tool in deciding patient-specific treatment plans and selection of devices prior to intervention.

  7. ARC-1995-AC95-0368-3

    NASA Image and Video Library

    1995-10-27

    Dr. Muriel Ross's Virtual Reality Application for Neuroscience Research Biocomputation. To study human disorders of balance and space motion sickness. Shown here is a 3D reconstruction of a nerve ending in the inner ear, nature's wiring of the balance organs.

  8. Comparison of Actual Surgical Outcomes and 3D Surgical Simulations

    PubMed Central

    Tucker, Scott; Cevidanes, Lucia; Styner, Martin; Kim, Hyungmin; Reyes, Mauricio; Proffit, William; Turvey, Timothy

    2009-01-01

    Purpose The advent of imaging software programs has proved useful for diagnosis, treatment planning, and outcome measurement, but the precision of 3D surgical simulation still needs to be tested. This study was conducted to determine if the virtual surgery performed on 3D models constructed from Cone-beam CT (CBCT) can correctly simulate the actual surgical outcome and to validate the ability of this emerging technology to recreate the orthognathic surgery hard tissue movements in 3 translational and 3 rotational planes of space. Methods Construction of pre- and post-surgery 3D models from CBCTs of 14 patients who had combined maxillary advancement and mandibular setback surgery and 6 patients who had one-piece maxillary advancement surgery was performed. The post-surgery and virtually simulated surgery 3D models were registered at the cranial base to quantify differences between simulated and actual surgery models. Hotelling T-tests were used to assess the differences between simulated and actual surgical outcomes. Results For all anatomic regions of interest, there was no statistically significant difference between the simulated and the actual surgical models. The right lateral ramus was the only region that showed a statistically significant, but small difference when comparing two- and one-jaw surgeries. Conclusions Virtual surgical methods were reliably reproduced, oral surgery residents could benefit from virtual surgical training, and computer simulation has the potential to increase predictability in the operating room. PMID:20591553

  9. Mental Representation of Spatial Cues During Spaceflight (3D-SPACE)

    NASA Astrophysics Data System (ADS)

    Clement, Gilles; Lathan, Corinna; Skinner, Anna; Lorigny, Eric

    2008-06-01

    The 3D-SPACE experiment is a joint effort between ESA and NASA to develop a simple virtual reality platform to enable astronauts to complete a series of tests while aboard the International Space Station (ISS). These tests will provide insights into the effects of the space environment on: (a) depth perception, by presenting 2D geometric illusions and 3D objects that subjects adjust with a finger trackball; (b) distance perception, by presenting natural or computer-generated 3D scenes where subjects estimate and report absolute distances or adjust distances; and (c) handwriting/drawing, by analyzing trajectories and velocities when subjects write or draw memorized objects with an electronic pen on a digitizing tablet. The objective of these tasks is to identify problems associated with 3D perception in astronauts with the goal of developing countermeasures to alleviate any associated performance risks. The equipment was uploaded to the ISS in April 2008, and the first measurements should take place during Increment 17.

  10. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for a scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space, and these mappings can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has been already deployed, and more are being added. 
We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
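    The attribute-encoding idea can be sketched as a mapping from a high-dimensional record to position, shape, color, and size; the attribute names, normalization, and shape palette below are assumptions for illustration, not the tool's actual API:

```python
def encode_record(record, mapping):
    """Map one high-dimensional data record to display attributes, in
    the spirit of encoding ~7-8 dimensions as XYZ plus shape, color,
    and size. `mapping` says which record index feeds each attribute."""
    lo, hi = mapping["range"]
    norm = lambda v: (v - lo) / (hi - lo)   # rescale to [0, 1]
    shapes = ["sphere", "cube", "cone", "torus"]
    dims = mapping["dims"]
    return {
        "xyz": tuple(norm(record[i]) for i in dims["xyz"]),
        "shape": shapes[int(record[dims["shape"]]) % len(shapes)],
        "size": 0.5 + norm(record[dims["size"]]),
        "color": (norm(record[dims["color"]]), 0.2, 0.8),
    }
```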

  11. Photorealistic virtual anatomy based on Chinese Visible Human data.

    PubMed

    Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y

    2006-04-01

    Virtual reality based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to generate realistic look and feel to advanced surgical simulators.

  12. Interactive floating windows: a new technique for stereoscopic video games

    NASA Astrophysics Data System (ADS)

    Zerebecki, Chris; Stanfield, Brodie; Tawadrous, Mina; Buckstein, Daniel; Hogue, Andrew; Kapralos, Bill

    2012-03-01

    The film industry has a long history of creating compelling experiences in stereoscopic 3D. Recently, the video game as an artistic medium has matured into an effective way to tell engaging and immersive stories. Given the current push to bring stereoscopic 3D technology into the consumer market there is considerable interest to develop stereoscopic 3D video games. Game developers have largely ignored the need to design their games specifically for stereoscopic 3D and have thus relied on automatic conversion and driver technology. Game developers need to evaluate solutions used in other media, such as film, to correct perceptual problems such as window violations, and modify or create new solutions to work within an interactive framework. In this paper we extend the dynamic floating window technique into the interactive domain enabling the player to position a virtual window in space. Interactively changing the position, size, and the 3D rotation of the virtual window, objects can be made to 'break the mask' dramatically enhancing the stereoscopic effect. By demonstrating that solutions from the film industry can be extended into the interactive space, it is our hope that this initiates further discussion in the game development community to strengthen their story-telling mechanisms in stereoscopic 3D games.

  13. Plot of virtual surgery based on CT medical images

    NASA Astrophysics Data System (ADS)

    Song, Limei; Zhang, Chunbo

    2009-10-01

    Although a CT device gives doctors a series of 2D medical images, it is difficult for these images to give a vivid view from which doctors can understand the diseased region. To help doctors plan surgery, a virtual surgery system was developed based on three-dimensional visualization techniques. After the diseased part of the patient is scanned by the CT device, a whole 3D view is built by the system's 3D reconstruction module. Cutting away a part is the operation doctors use most often in real surgery. A curve is created in 3D space, and points can be added on the curve automatically or manually. The positions of the points change the shape of the cut curve, so the curve can be adjusted by moving its control points. If the result of the cut is not satisfactory, all operations can be cancelled and restarted. This flexible virtual surgery makes the real surgery more convenient to plan. In contrast to existing medical image processing systems, a virtual surgery module is added, and the virtual surgery can be rehearsed many times, until the doctors are confident enough to start the real surgery. Because the virtual surgery system provides more 3D information about the diseased region, difficult surgeries can be discussed by expert doctors in different cities over the Internet. This is a useful way to understand the character of the diseased region and thus to reduce surgical risk.
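    The described cut-curve editing, with adjustable control points and the ability to cancel all operations, might look like this minimal sketch (class and method names are hypothetical):

```python
class CutCurve:
    """Editable polyline cut path with undo: add or move control
    points, and cancel everything if the planned cut is unsatisfactory."""
    def __init__(self):
        self.points, self.history = [], []

    def add_point(self, p):
        self.history.append(("add", len(self.points)))
        self.points.append(p)

    def move_point(self, i, p):
        # Remember the old position so the move can be undone.
        self.history.append(("move", i, self.points[i]))
        self.points[i] = p

    def undo(self):
        op = self.history.pop()
        if op[0] == "add":
            self.points.pop(op[1])
        else:
            self.points[op[1]] = op[2]

    def cancel_all(self):
        # Roll back every operation, restoring the untouched model.
        while self.history:
            self.undo()
```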

  14. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
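    One step of the claimed pipeline, updating the visibility dimension from selected dimension ranges, could be sketched as follows (column names and the filter format are assumptions, not the patent's claims):

```python
def update_visibility(table, filters):
    """Recompute the visibility dimension of a visualization table:
    a point stays visible only if every filtered data dimension falls
    inside its selected (lo, hi) range."""
    for row in table:
        row["visible"] = all(lo <= row[dim] <= hi
                             for dim, (lo, hi) in filters.items())
    return table
```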

  15. Research on a multi-channel interactive virtual assembly system for power equipment in the “VR+” era

    NASA Astrophysics Data System (ADS)

    Ren, Yilong; Duan, Xitong; Wu, Lei; He, Jin; Xu, Wu

    2017-06-01

    With the arrival of the “VR+” era, traditional virtual assembly systems for power equipment can no longer satisfy our growing needs. Based on an analysis of traditional virtual assembly systems for electric power equipment and of how VR technology is applied to them in our country, this paper proposes a scheme for building such a system. First, the power equipment information is acquired; then OpenGL and multi-texture techniques are used to build a 3D solid graphics library. After three-dimensional modeling is complete, the graphics generation program is packaged as a dynamic link library (DLL), which modularizes the power equipment model library and hides its generation algorithm. Once the 3D model database is established, a virtual assembly system for 3D power equipment is set up, separating assembly operations on the equipment from physical space. To address the shortcomings of traditional gesture recognition algorithms, we also propose a data-glove gesture recognition algorithm based on a BP neural network trained with an improved PSO algorithm. With these elements, the virtual assembly system for power equipment achieves true multi-channel interaction.
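    Plain particle swarm optimization, the starting point for the improved algorithm mentioned above, can be sketched as below; here it minimizes an arbitrary loss, whereas in the described system the loss would be the BP network's training error on data-glove samples (the parameters are conventional defaults, not the paper's variant):

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=200, seed=1):
    """Plain PSO: each particle tracks its personal best, the swarm
    tracks a global best, and velocities blend inertia with attraction
    toward both bests."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```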

  16. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.

  17. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  18. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills

    NASA Astrophysics Data System (ADS)

    Choi, Kup-Sze

    This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, with focus on the skills required for making a cross-shape trench in the cataractous lens by an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools. A haptic device is also used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituting tetrahedrons of the lens model. Collisions between the virtual probe and the lens are effectively identified by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing the trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
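    Hierarchical space partitioning of the kind described can be sketched as a minimal point octree, with a radius query standing in for probe-lens collision culling (capacity and depth limits here are arbitrary; the real simulator partitions tetrahedra, not bare points):

```python
class Octree:
    """Minimal point octree: leaves hold points until `capacity` is
    exceeded, then split into eight octants. `query` returns points
    within radius r of a probe tip, skipping far-away octants."""
    def __init__(self, center, half, depth=5, capacity=4):
        self.center, self.half = center, half
        self.depth, self.capacity = depth, capacity
        self.points, self.children = [], None

    def _child_index(self, p):
        ix = int(p[0] > self.center[0])
        iy = int(p[1] > self.center[1])
        iz = int(p[2] > self.center[2])
        return ix | (iy << 1) | (iz << 2)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity and self.depth > 0:
                self._split()
        else:
            self.children[self._child_index(p)].insert(p)

    def _split(self):
        h = self.half / 2
        self.children = [Octree(
            tuple(self.center[i] + h * (1 if (k >> i) & 1 else -1)
                  for i in range(3)),
            h, self.depth - 1, self.capacity) for k in range(8)]
        pts, self.points = self.points, []
        for p in pts:
            self.children[self._child_index(p)].insert(p)

    def query(self, p, r):
        # Prune octants whose box cannot intersect the probe sphere.
        if any(abs(p[i] - self.center[i]) > self.half + r for i in range(3)):
            return []
        if self.children is None:
            return [q for q in self.points
                    if sum((q[i] - p[i]) ** 2 for i in range(3)) <= r * r]
        return [q for c in self.children for q in c.query(p, r)]
```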

  19. Explore the virtual side of earth science

    USGS Publications Warehouse

    ,

    1998-01-01

    Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).

  20. Exploring the Use of Three-Dimensional Multi-User Virtual Environments for Online Problem-Based Learning

    ERIC Educational Resources Information Center

    Omale, Nicholas M.

    2010-01-01

    This exploratory case study examines how three media attributes in 3-D MUVEs--avatars, 3-D spaces and bubble dialogue boxes--affect interaction in an online problem-based learning (PBL) activity. The study participants were eleven undergraduate students enrolled in a 200-level, three-credit-hour technology integration course at a Midwestern…

  1. Virtual arthroscopy of the visible human female temporomandibular joint.

    PubMed

    Ishimaru, T; Lew, D; Haller, J; Vannier, M W

    1999-07-01

    This study was designed to obtain views of the temporomandibular joint (TMJ) by means of computed arthroscopic simulation (virtual arthroscopy) using three-dimensional (3D) processing. Volume renderings of the TMJ from very thin cryosection slices of the Visible Human Female were taken off the Internet. Analyze(AVW) software (Biomedical Imaging Resource, Mayo Foundation, Rochester, MN) on a Silicon Graphics 02 workstation (Mountain View, CA) was then used to obtain 3D images and allow the navigation "fly-through" of the simulated joint. Good virtual arthroscopic views of the upper and lower joint spaces of both TMJs were obtained by fly-through simulation from the lateral and endaural sides. It was possible to observe the presence of a partial defect in the articular disc and an osteophyte on the condyle. Virtual arthroscopy provided visualization of regions not accessible to real arthroscopy. These results indicate that virtual arthroscopy will be a new technique to investigate the TMJ of the patient with TMJ disorders in the near future.

  2. Voxel inversion of airborne electromagnetic data

    NASA Astrophysics Data System (ADS)

    Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.

    2013-12-01

    Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which is itself a subtle process in which valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The positions of the nodes are fixed during the inversion and are chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and the direction of the "virtual" horizontal stratification, is defined for each 1D data set; for EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position. 
B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoints of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values at the centres of the mesh cells. The new definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by using a voxel (hydro)geological grid to define the geophysical model space. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, such as resistivity logs, can be applied directly to the voxel model space even if the borehole positions do not coincide with the actual observation points; the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council for Strategic Research under grant number DSF 11-116763.
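
    As a rough illustration of the voxel model space described above, the sketch below builds a "virtual" 1D model by interpolating node-based properties with inverse-distance weighting, one of the interpolation functions f mentioned in the abstract. The node positions, resistivity values, and layer boundaries are invented for illustration; this is not the AarhusInv implementation.

```python
import numpy as np

def idw_interpolate(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of a soil property (e.g. resistivity)
    at a query point, from property values fixed on voxel-grid nodes."""
    d = np.linalg.norm(nodes - query, axis=1)
    if d.min() < eps:                      # query coincides with a node
        return float(values[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical voxel nodes (x, y, depth) carrying resistivity values (ohm-m)
nodes = np.array([[0., 0., 0.], [0., 0., 5.], [0., 0., 12.],
                  [0., 0., 20.], [10., 0., 10.]])
values = np.array([50., 100., 80., 20., 60.])

# "Virtual" 1D model at a sounding position: one value per layer midpoint
layer_tops = np.array([0., 2., 5., 10.])
layer_bottoms = np.array([2., 5., 10., 20.])
midpoints = (layer_tops + layer_bottoms) / 2.0
model = [idw_interpolate(nodes, values, np.array([1., 1., z])) for z in midpoints]
```

    Because the node positions are fixed, the same interpolation can be evaluated at borehole locations to constrain prior information, exactly as the abstract describes.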

  3. Navigation system for robot-assisted intra-articular lower-limb fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja

    2016-10-01

    In the surgical treatment of lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of bone fragments. However, optimal correction in 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction of the fractures is then performed by our robotic system based on the virtual reduction. 3D models of bone fragments are segmented from the CT scan. Fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction of the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker enabling real-time 3D guidance. Motion commands for the robot connected to the bone fragment are generated, and the fracture is physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each one was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). The corresponding physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text] when the robot reduced the fracture. The experimental outcome demonstrates the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.
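
    The residual errors reported above compare a target fragment pose with the physically achieved one. A minimal sketch of how a translational and a rotational pose error can be computed from 4x4 homogeneous transforms is shown below; it is a generic illustration, not the paper's implementation, and the example poses are hypothetical.

```python
import numpy as np

def pose_error(T_target, T_achieved):
    """Residual error between two 4x4 homogeneous fragment poses:
    translational error (Euclidean norm of the offset) and rotational
    error (angle of the relative rotation, in degrees)."""
    dt = np.linalg.norm(T_target[:3, 3] - T_achieved[:3, 3])
    R_rel = T_target[:3, :3].T @ T_achieved[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_theta))

# Example: achieved pose offset by 1 mm in x and rotated 2 degrees about z
theta = np.radians(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
T_target = np.eye(4)
T_achieved = np.eye(4)
T_achieved[:3, :3] = Rz
T_achieved[0, 3] = 1.0
dt, dr = pose_error(T_target, T_achieved)   # dt = 1.0 mm, dr = 2.0 deg
```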

  4. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  5. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    Describes the user interface (hardware and software), the design space, and preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  6. Automatic Assembly of Combined Checking Fixture for Auto-Body Components Based on Fixture Elements Libraries

    NASA Astrophysics Data System (ADS)

    Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi

    In this paper, 3-D models of combined fixture elements are designed, classified by function, and stored as element libraries: supporting elements, jointing elements, basic elements, localization elements, clamping elements, adjusting elements, etc. Automatic assembly of a 3-D combined checking fixture for an auto-body part is then presented based on modularization theory. In the virtual auto-body assembly space, locating-constraint mapping and assembly rule-based reasoning are used to calculate the positions of the modular elements according to the localization points and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system into the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on secondary development of UG. Practice has proven the method feasible and highly efficient.
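
    The coordinate transformation mentioned above can be sketched as follows. This is a generic illustration of mapping part-frame points into an assembly space with a homogeneous transformation matrix, not the authors' UG-based code; the rotation, translation, and localization points are made up for the example.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_assembly_space(points, T):
    """Map Nx3 part-coordinate points into the virtual assembly space."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homo.T).T[:, :3]

# Hypothetical localization points of an auto-body part, rotated 90 degrees
# about z and shifted to its fixture position in the assembly space
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T = homogeneous(R, np.array([100., 50., 0.]))
pts = np.array([[10., 0., 0.], [0., 20., 0.]])
assembled = to_assembly_space(pts, T)   # [[100., 60., 0.], [80., 50., 0.]]
```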

  7. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    NASA Astrophysics Data System (ADS)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, an increase in international investment in hardware and software technology to support programs that adopt algorithms for photomodeling or data management from laser scanners has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies in order to intervene directly on the current VR tools within a specific workflow, in light of issues related to the intensive use of such devices, and outlines a quick overview of a possible "virtual migration" phenomenon, assuming an integration with new high-speed internet systems capable of triggering a massive cyberspace colonization process that would paradoxically also affect everyday life and, more generally, human spatial perception. The contribution aims to analyze the application systems used for low-cost 3d photogrammetry by means of a precise pipeline, clarifying how a 3d model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion on virtual exploration platforms. The workflow analysis follows case studies related to photomodeling, digital retopology and "virtual 3d transfer" of some small archaeological artifacts and an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations were conducted on cheap or free-licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data processing speed and management.

  8. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), so one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  9. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to provide a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the task objects. 2D virtual environments, in contrast, represent the tasks with a low degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reach task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving a virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aims of the tasks, which consist of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement are analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was given to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories improved as they completed the therapy, suggesting increased motor recovery. Despite the similarity of most kinematic parameters, differences in reaction time and path length were greater in the 3D task. 
The success rates were very similar. In conclusion, the use of 2D environments in virtual therapy may be a more appropriate and comfortable way to perform upper-limb rehabilitation tasks for post-stroke patients, in terms of the accuracy needed to effect optimal kinematic trajectories. PMID:27616992
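
    As an illustration of the kinematic parameters analyzed above (maximum speed, reaction time, path length), the sketch below computes them from a sampled 2D trajectory. The onset-speed threshold and the sample data are assumptions for the example, not values or code from the study.

```python
import numpy as np

def kinematic_summary(t, xy, onset_speed=10.0):
    """Summarize a sampled 2D reach trajectory: maximum speed, reaction
    time (time of the first sample whose speed exceeds an onset threshold),
    and path length. t in seconds, xy as Nx2 positions in mm."""
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # per-step distances
    speed = seg / np.diff(t)                            # per-step speeds
    path_length = float(seg.sum())
    max_speed = float(speed.max())
    above = np.nonzero(speed > onset_speed)[0]
    reaction_time = float(t[above[0] + 1]) if len(above) else None
    return max_speed, reaction_time, path_length

# Synthetic reach: the hand is still for 0.1 s, then accelerates along x
t = np.array([0.0, 0.1, 0.2, 0.3])
xy = np.array([[0., 0.], [0., 0.], [10., 0.], [30., 0.]])
max_speed, reaction_time, path_length = kinematic_summary(t, xy)
```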

  10. Accuracy of fetal sex determination in the first trimester of pregnancy using 3D virtual reality ultrasound.

    PubMed

    Bogers, Hein; Rifouna, Maria S; Koning, Anton H J; Husen-Ebbinge, Margreet; Go, Attie T J I; van der Spek, Peter J; Steegers-Theunissen, Régine P M; Steegers, Eric A P; Exalto, Niek

    2018-05-01

    Early detection of fetal sex is becoming more popular. The aim of this study was to evaluate the accuracy of fetal sex determination in the first trimester, using 3D virtual reality. Three-dimensional (3D) US volumes were obtained in 112 pregnancies between 9 and 13 weeks of gestational age. They were offline projected as a hologram in the BARCO I-Space and subsequently the genital tubercle angle was measured. Separately, the 3D US aspect of the genitalia was examined for having a male or female appearance. Although a significant difference in genital tubercle angles was found between male and female fetuses, it did not result in a reliable prediction of fetal gender. Correct sex prediction based on first trimester genital appearance was at best 56%. Our results indicate that accurate determination of the fetal sex in the first trimester of pregnancy is not possible, even using an advanced 3D US technique. © 2017 Wiley Periodicals, Inc.

  11. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve

    2011-03-01

    Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine, as well as other tasks that require hand-eye coordination. One of HUVR's most distinctive characteristics is that users can place their hands inside the virtual environment without occluding the 3D image. Built using open-source software and consumer-level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.

  12. Virtual Heritage Tours: Developing Interactive Narrative-Based Environments for Historical Sites

    NASA Astrophysics Data System (ADS)

    Tuck, Deborah; Kuksa, Iryna

    In the last decade there has been a noticeable growth in the use of virtual reality (VR) technologies for reconstructing cultural heritage sites. However, many of these virtual reconstructions show little of the sites' social histories. Narrating the Past is a research project that aims to address this issue by investigating methods for embedding social histories within cultural heritage sites and by creating narrative-based virtual environments (VEs) within them. The project aims to enhance the visitor's knowledge and understanding by developing a navigable 3D story space in which participants are immersed. This has the potential to create a malleable virtual environment allowing visitors to configure their own narrative paths.

  13. Dark Energy and Dark Matter as w = -1 Virtual Particles and the World Hologram Model

    NASA Astrophysics Data System (ADS)

    Sarfatti, Jack

    2011-04-01

    The elementary physics battle-tested principles of Lorentz invariance, the Einstein equivalence principle, and the boson commutation and fermion anti-commutation rules of quantum field theory explain gravitationally repulsive dark energy as virtual bosons and gravitationally attractive dark matter as virtual fermion-antifermion pairs. The small dark energy density in our past light cone is the reciprocal entropy-area of our future light cone's 2D future event horizon in a Novikov consistent loop in time in our accelerating universe. Yakir Aharonov's "back-from-the-future" post-selected final boundary condition is set at our observer-dependent future horizon, which also explains why the irreversible thermodynamic arrow of time is aligned with the accelerating dark energy expansion of the bulk 3D space interior to our future 2D horizon surrounding it as the hologram screen. Seth Lloyd has argued that all 2D horizon surrounding surfaces are pixelated quantum computers projecting interior bulk 3D quanta of volume (Planck area) x sqrt(area of future horizon) as their hologram images in 1-1 correspondence.

  14. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for the basic and applied research experiments that are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. 
The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and the simulation of human-machine systems in a desktop-sized work volume.

  15. The WINCKELMANN300 Project: Dissemination of Culture with Virtual Reality at the Capitoline Museum in Rome

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Malatesta, S. G.; Lella, F.; Fanini, B.; Sala, F.; Dodero, E.; Petacco, L.

    2018-05-01

    The best way to disseminate culture nowadays is the creation of scenarios with virtual and augmented reality that supply museum visitors with a powerful, interactive tool that allows them to learn sometimes difficult concepts in an easy, entertaining way. 3D models derived from reality-based techniques are now used to preserve, document and restore historical artefacts. These digital contents are also a powerful instrument for interactively communicating their significance to non-specialists, making it easier to understand concepts that are sometimes complicated or unclear. Virtual and Augmented Reality are surely valid tools for interacting with 3D models and a fundamental help in making culture more accessible to the wide public. These technologies can help museum curators adapt the cultural proposal and the information about the artefacts to the different categories of visitors. They allow visitors to travel through space and time and have a great educative function, permitting information and concepts that could prove complicated to be explained in an easy and attractive way. The aim of this paper is to create a virtual scenario and an augmented reality app to recreate specific spaces in the Capitoline Museum in Rome as they were during Winckelmann's time, placing specific statues in their original 18th-century positions.

  16. The influence of the reflective environment on the absorption of a human male exposed to representative base station antennas from 300 MHz to 5 GHz.

    PubMed

    Vermeeren, G; Gosselin, M C; Kühn, S; Kellerman, V; Hadjem, A; Gati, A; Joseph, W; Wiart, J; Meyer, F; Kuster, N; Martens, L

    2010-09-21

    The environment is an important parameter when evaluating exposure to radio-frequency electromagnetic fields. This study numerically investigates the variation in the whole-body and peak spatially averaged specific absorption rate (SAR) in the heterogeneous virtual family male placed in front of a base station antenna in a reflective environment. The SAR values in a reflective environment are also compared to the values obtained when no environment is present (free space). The virtual family male was placed at four distances (30 cm, 1 m, 3 m and 10 m) in front of six base station antennas (operating at 300 MHz, 450 MHz, 900 MHz, 2.1 GHz, 3.5 GHz and 5.0 GHz, respectively) and in three reflective environments (a perfectly conducting wall, a perfectly conducting ground, and a perfectly conducting ground + wall). A total of 72 configurations were examined. The absorption in the heterogeneous body model was determined using the 3D electromagnetic (EM) finite-difference time-domain (FDTD) solver Semcad-X. For the larger simulations, requirements in terms of computer resources were reduced by using a generalized Huygens' box approach. The ratio of the SAR in the virtual family male in a reflective environment to the SAR in the free-space environment ranged from -8.7 dB up to 8.0 dB. A worst-case reflective environment could not be determined. ICNIRP reference levels were not always shown to be compliant with the basic restrictions.

  17. CAVE2: a hybrid reality environment for immersive simulation and information analysis

    NASA Astrophysics Data System (ADS)

    Febretti, Alessandro; Nishimoto, Arthur; Thigpen, Terrance; Talandis, Jonas; Long, Lance; Pirtle, J. D.; Peterka, Tom; Verlo, Alan; Brown, Maxine; Plepys, Dana; Sandin, Dan; Renambot, Luc; Johnson, Andrew; Leigh, Jason

    2013-03-01

    Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2(TM) Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system, 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive stereo LCD panels that create an approximately 320-degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.

  18. Fast and Forceful: Modulation of Response Activation Induced by Shifts of Perceived Depth in Virtual 3D Space

    PubMed Central

    Plewan, Thorsten; Rinkenauer, Gerhard

    2016-01-01

    Reaction time (RT) can strongly be influenced by a number of stimulus properties. For instance, there was converging evidence that perceived size rather than physical (i.e., retinal) size constitutes a major determinant of RT. However, this view has recently been challenged since within a virtual three-dimensional (3D) environment retinal size modulation failed to influence RT. In order to further investigate this issue in the present experiments response force (RF) was recorded as a supplemental measure of response activation in simple reaction tasks. In two separate experiments participants’ task was to react as fast as possible to the occurrence of a target located close to the observer or farther away while the offset between target locations was increased from Experiment 1 to Experiment 2. At the same time perceived target size (by varying the retinal size across depth planes) and target type (sphere vs. soccer ball) were modulated. Both experiments revealed faster and more forceful reactions when targets were presented closer to the observers. Perceived size and target type barely affected RT and RF in Experiment 1 but differentially affected both variables in Experiment 2. Thus, the present findings emphasize the usefulness of RF as a supplement to conventional RT measurement. On a behavioral level the results confirm that (at least) within virtual 3D space perceived object size neither strongly influences RT nor RF. Rather the relative position within egocentric (body-centered) space presumably indicates an object’s behavioral relevance and consequently constitutes an important modulator of visual processing. PMID:28018273

  19. The James Webb Space Telescope RealWorld-InWorld Design Challenge: Involving Professionals in a Virtual Classroom

    NASA Astrophysics Data System (ADS)

    Masetti, Margaret; Bowers, S.

    2011-01-01

    Students around the country are becoming experts on the James Webb Space Telescope by designing solutions to two of the design challenges presented by this complex mission. RealWorld-InWorld has two parts; the first (the Real World portion) has high-school students working face to face in their classroom as engineers and scientists. The InWorld phase starts December 15, 2010 as interested teachers and their teams of high school students register to move their work into a 3D multi-user virtual world environment. At the start of this phase, college students from all over the country choose a registered team to lead InWorld. Each InWorld team is also assigned an engineer or scientist mentor. In this virtual world setting, each team refines their design solutions and creates a 3D model of the Webb telescope. InWorld teams will use 21st century tools to collaborate and build in the virtual world environment. Each team will learn, not only from their own team members, but will have the opportunity to interact with James Webb Space Telescope researchers through the virtual world setting, which allows for synchronous interactions. Halfway through the challenge, design solutions will be critiqued and a mystery problem will be introduced for each team. The top five teams will be invited to present their work during a synchronous Education Forum April 14, 2011. The top team will earn scholarships and technology. This is an excellent opportunity for professionals in both astronomy and associated engineering disciplines to become involved with a unique educational program. Besides the chance to mentor a group of interested students, there are many opportunities to interact with the students as a guest, via chats and presentations.

  20. IO Management Controller for Time and Space Partitioning Architectures

    NASA Astrophysics Data System (ADS)

    Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien

    2015-09-01

    Integrated Modular Avionics (IMA) has been industrialized in the aeronautical domain to enable the independent qualification of different application software from different suppliers on the same generic computer, the latter being a single terminal in a deterministic network. This concept allows the different applications to be distributed efficiently and transparently across the network, accurately sizing the hardware equipment to embed on the aircraft through the configuration of the virtual computers and the virtual network. This concept has been studied for the space domain and requirements have been issued [D04],[D05]. Experiments in the space domain have been carried out at the computer level through ESA and CNES initiatives [D02],[D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as the CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space-domain technologies where I/O components (or IPs) do not offer advanced features such as buffering, descriptors or virtualization, CPU overhead in terms of performance is mainly due to shared-interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper presents a solution to reduce this execution overhead with an open, modular and configurable controller.

  1. Visualizing dynamic geosciences phenomena using an octree-based view-dependent LOD strategy within virtual globes

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo

    2011-09-01

    Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualization of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional (4D) data, 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also, the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphics cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework in which an octree-based multiresolution data structure is implemented to organize time-series 3D geospatial data for use in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data from a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performance with and without the octree-based LOD strategy is compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances visualization performance when rendering dynamic geospatial phenomena in virtual globes.
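    The view-dependent refinement at the heart of such an LOD strategy can be sketched in a few lines. This is a minimal illustration under assumed names (`OctreeNode`, `select_lod`) and a simple size/distance heuristic, not the paper's actual implementation:

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OctreeNode:
    # Hypothetical octree node: cube centre, half-size, resolution level, children.
    center: Tuple[float, float, float]
    half_size: float
    level: int
    children: List["OctreeNode"] = field(default_factory=list)

def select_lod(node, camera_pos, max_level, threshold=1.0):
    """Collect the nodes to render for the current viewpoint.

    A node is refined into its children when its size/distance ratio
    exceeds `threshold` (i.e. it would look too coarse on screen);
    otherwise it is rendered at its current resolution.
    """
    dist = math.dist(node.center, camera_pos)
    if node.children and node.level < max_level and node.half_size / max(dist, 1e-9) > threshold:
        selected = []
        for child in node.children:
            selected.extend(select_lod(child, camera_pos, max_level, threshold))
        return selected
    return [node]

# Tiny example: a root with two children; a nearby camera forces refinement,
# a distant one keeps the coarse root node.
root = OctreeNode((0.0, 0.0, 0.0), 8.0, 0,
                  children=[OctreeNode((c, c, c), 4.0, 1) for c in (-4.0, 4.0)])
near = select_lod(root, (0.0, 0.0, 1.0), max_level=2, threshold=0.5)
far = select_lod(root, (0.0, 0.0, 100.0), max_level=2, threshold=0.5)
print(len(near), len(far))
```

    A real virtual-globe client would replace the size/distance ratio with a projected screen-space error and stream child tiles asynchronously from the distributed data servers.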

  2. Intra-operative 3D imaging system for robot-assisted fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-01-01

    Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of the bone fragments is essential for a good and fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e., the image intensifier) does not provide the surgeon with enough information about the fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
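    The reported accuracy figures are RMS errors between target and achieved fragment poses. Below is a minimal sketch of how such translational and rotational RMSE values might be computed; the pairing, units and data are illustrative assumptions, not the study's evaluation code:

```python
import math

def translational_rmse(targets, achieved):
    """Root-mean-square Euclidean position error (mm) over fragment pairs."""
    errs = [math.dist(t, a) ** 2 for t, a in zip(targets, achieved)]
    return math.sqrt(sum(errs) / len(errs))

def rotational_rmse(target_angles, achieved_angles):
    """Root-mean-square per-axis rotation error (degrees)."""
    errs = [(t - a) ** 2 for t, a in zip(target_angles, achieved_angles)]
    return math.sqrt(sum(errs) / len(errs))

# Made-up target vs. achieved poses with residuals on the order reported.
t_rmse = translational_rmse([(0, 0, 0), (10, 0, 0)], [(0.9, 0, 0), (10, 1.1, 0)])
r_rmse = rotational_rmse([0.0, 5.0, 90.0], [0.8, 5.9, 89.2])
print(round(t_rmse, 2), round(r_rmse, 2))
```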

  3. Atmospheric volatilization and distribution of (Z)- and (E)-1,3-dichloropropene in field beds with and without plastic covers.

    PubMed

    Thomas, John E; Allen, L Hartwell; McCormack, Leslie A; Vu, Joseph C; Dickson, Donald W; Ou, Li-Tse

    2004-01-01

    The fumigant 1,3-dichloropropene (1,3-D) is considered to be a potential replacement for methyl bromide when methyl bromide is phased out in 2005. This study on surface emissions and subsurface diffusion of 1,3-D in a Florida sandy soil was conducted in field beds with or without plastic covers. After injection of the commercial fumigant Telone II by conventional chisels at 30 cm depth into field beds covered with polyethylene film (PE), virtually impermeable film (VIF), or left bare, (Z)- and (E)-1,3-D rapidly diffused upward. Twenty hours after injection, the majority of (Z)- and (E)-1,3-D had moved upward from the 30 cm depth to the 5-20 cm layer. Downward movement of the two isomers in the beds with or without a plastic cover was not significant. (Z)-1,3-D diffused more rapidly than (E)-1,3-D. VIF had a good capacity to retain (Z)- and (E)-1,3-D in the soil pore air space. Vapor concentrations of the two isomers in the shallow subsurface of the field bed covered with VIF were greater than those in the two beds covered with PE or left bare. In addition, the VIF cover provided a more uniform distribution of (Z)- and (E)-1,3-D in the shallow subsurface than the PE cover or no cover. VIF was also better at retarding surface emissions of the two isomers from soil in field beds than the PE cover or no cover.

  4. STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus

    NASA Image and Video Library

    2007-08-09

    JSC2007-E-41532 (9 Aug. 2007) --- Astronaut Stephanie D. Wilson, STS-120 mission specialist, uses the virtual reality lab at Johnson Space Center to train for her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  5. VR SAFER trainer

    NASA Image and Video Library

    2014-08-05

    ISS040-E-088794 (5 Aug. 2014) --- In the Unity node of the International Space Station, NASA astronaut Reid Wiseman, Expedition 40 flight engineer, uses a laptop computer 3D virtual spacewalk trainer in preparation for two upcoming U.S. sessions of extravehicular activity (EVA).

  6. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for the reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for the development of virtual environment technologies for scientific and medical use. This report describes the Virtual Surgery Workstation Project, ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  7. An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard

    2014-05-01

    In this paper we present an interactive 3D visualization tool for scientific analysis and planning of planetary missions. At the moment scientists have to look at individual camera images separately; there is no tool to combine them in three dimensions and look at them seamlessly as a geologist would do (by walking backwards and forwards, resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales, ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g. rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important; therefore the material properties of the rock surfaces are considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphics Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means using skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction at multiple scales, scientists can also perform various measurements, e.g. the geo-coordinates of a selected point or the distance between two surface points. Rover or other models can be placed into the scene and snapped onto a chosen location of the terrain. These are important features to support the planning of rover paths. In addition, annotations can be placed directly into the 3D scene, where they also serve as landmarks to aid navigation. The presented visualization and planning tool is a valuable asset for scientific analysis of planetary mission data. It complements traditional methods by giving access to an interactive, realistically rendered virtual 3D reconstruction. Representative examples and further information about the interactive 3D visualization tool can be found on the FP7-SPACE Project PRoViDE web page http://www.provide-space.eu/interactive-virtual-3d-tool/. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 'PRoViDE'.
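    Evaluating a BRDF per pixel is the core of the realistic rendering step described above. The sketch below uses a generic Lambertian-plus-Phong model for illustration; the project itself estimates BRDFs from image data and runs them on the GPU, neither of which is reproduced here:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(n, l, v, albedo=0.5, specular=0.04, shininess=32.0):
    """Reflected radiance for a unit directional light: Lambert + Phong lobe.

    n, l, v are unit normal, light and view vectors. This is a generic
    analytic stand-in for an image-estimated BRDF.
    """
    ndotl = max(dot(n, l), 0.0)
    # Mirror-reflect the light direction about the surface normal.
    r = tuple(2.0 * ndotl * ni - li for ni, li in zip(n, l))
    brdf = albedo / math.pi + specular * max(dot(r, v), 0.0) ** shininess
    return brdf * ndotl

n = (0.0, 0.0, 1.0)
view = (0.0, 0.0, 1.0)
overhead = shade(n, (0.0, 0.0, 1.0), view)                     # sun at zenith
low_sun = shade(n, (math.sin(1.4), 0.0, math.cos(1.4)), view)  # sun near the horizon
print(overhead > low_sun)  # radiance drops as the light grazes the surface
```

    Switching between the Mars and Earth illumination modes amounts to feeding a different skylight distribution into such a shading function.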

  8. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a living being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, ground-truth information about the three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics the realistic eye pose of a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382

  9. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  10. Stereoselective virtual screening of the ZINC database using atom pair 3D-fingerprints.

    PubMed

    Awale, Mahendra; Jin, Xian; Reymond, Jean-Louis

    2015-01-01

    Tools to explore large compound databases in search for analogs of query molecules provide a strategically important support in drug discovery to help identify available analogs of any given reference or hit compound by ligand based virtual screening (LBVS). We recently showed that large databases can be formatted for very fast searching with various 2D-fingerprints using the city-block distance as similarity measure, in particular a 2D-atom pair fingerprint (APfp) and the related category extended atom pair fingerprint (Xfp) which efficiently encode molecular shape and pharmacophores, but do not perceive stereochemistry. Here we investigated related 3D-atom pair fingerprints to enable rapid stereoselective searches in the ZINC database (23.2 million 3D structures). Molecular fingerprints counting atom pairs at increasing through-space distance intervals were designed using either all atoms (16-bit 3DAPfp) or different atom categories (80-bit 3DXfp). These 3D-fingerprints retrieved molecular shape and pharmacophore analogs (defined by OpenEye ROCS scoring functions) of 110,000 compounds from the Cambridge Structural Database with equal or better accuracy than the 2D-fingerprints APfp and Xfp, and showed comparable performance in recovering actives from decoys in the DUD database. LBVS by 3DXfp or 3DAPfp similarity was stereoselective and gave very different analogs when starting from different diastereomers of the same chiral drug. Results were also different from LBVS with the parent 2D-fingerprints Xfp or APfp. 3D- and 2D-fingerprints also gave very different results in LBVS of folded molecules where through-space distances between atom pairs are much shorter than topological distances. 3DAPfp and 3DXfp are suitable for stereoselective searches for shape and pharmacophore analogs of query molecules in large databases. 
    Web browsers for searching ZINC by 3DAPfp and 3DXfp similarity are accessible at www.gdb.unibe.ch and should provide useful assistance to drug discovery projects. Graphical abstract: Atom pair fingerprints based on through-space distances (3DAPfp) provide better shape encoding than atom pair fingerprints based on topological distances (APfp), as measured by the recovery of ROCS shape analogs by fingerprint similarity.
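    A much-simplified sketch of the underlying idea: count atom pairs per through-space distance interval and compare fingerprints with the city-block distance. The bin edges and function names here are assumptions for illustration; the published 3DAPfp/3DXfp use their own binning scheme and atom categories:

```python
import math
from itertools import combinations

# Hypothetical distance intervals (in angstroms), not the published binning.
BINS = [1.0, 2.0, 4.0, 8.0, 16.0]

def ap3d_fingerprint(coords):
    """Count atom pairs per through-space distance interval.

    `coords` is a list of (x, y, z) atom positions; the result is one
    counter per distance bin, a simplified stand-in for 3DAPfp.
    """
    fp = [0] * len(BINS)
    for a, b in combinations(coords, 2):
        d = math.dist(a, b)
        for i, upper in enumerate(BINS):
            if d <= upper:
                fp[i] += 1
                break
    return fp

def city_block(fp1, fp2):
    """City-block (Manhattan) distance used as the similarity measure."""
    return sum(abs(x - y) for x, y in zip(fp1, fp2))

# A straight 3-atom chain vs. a bent one: same topology, different 3D shape.
mol_a = [(0, 0, 0), (1.5, 0, 0), (3.0, 0, 0)]
mol_b = [(0, 0, 0), (1.5, 0, 0), (0, 1.0, 0)]
fa, fb = ap3d_fingerprint(mol_a), ap3d_fingerprint(mol_b)
print(fa, fb, city_block(fa, fb))
```

    Because the counts are taken over 3D distances, two conformers or diastereomers with identical 2D connectivity can still yield different fingerprints, which is what makes the search stereoselective.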

  11. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
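    The core of coil compression is a principal component analysis across channels. The sketch below compresses real-valued multicoil samples at one spatial location into a single virtual coil via power iteration; actual MRI data is complex-valued, and the per-location alignment step described above is omitted, so this is only an illustration of the idea:

```python
import random

def dominant_eigvec(cov, iters=200):
    """Power iteration for the leading eigenvector of a symmetric matrix."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def compress_to_one_virtual_coil(samples):
    """Project multicoil samples onto the top principal channel direction.

    `samples` is a list of per-sample channel vectors (real-valued here).
    Returns one virtual-coil value per sample.
    """
    n = len(samples[0])
    cov = [[sum(s[i] * s[j] for s in samples) for j in range(n)] for i in range(n)]
    v = dominant_eigvec(cov)
    return [sum(si * vi for si, vi in zip(s, v)) for s in samples]

# Four channels that each see a scaled copy of the same signal plus a
# little noise: one virtual coil should capture nearly all the energy.
random.seed(0)
signal = [random.gauss(0, 1) for _ in range(100)]
gains = [1.0, 0.8, 0.5, 0.3]
samples = [[g * s + random.gauss(0, 0.01) for g in gains] for s in signal]
virtual = compress_to_one_virtual_coil(samples)
kept = sum(x * x for x in virtual)
total = sum(sum(x * x for x in s) for s in samples)
print(kept / total > 0.99)
```

    Repeating such a decomposition independently at each location along the fully-sampled directions, and then aligning the resulting projection vectors so the virtual sensitivities vary smoothly, is the essence of the technique described in the abstract.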

  12. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussions then concentrate on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of tremendous amounts of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  13. Interface for Physics Simulation Engines

    NASA Technical Reports Server (NTRS)

    Damer, Bruce

    2007-01-01

    DSS-Prototyper is open-source, real-time 3D virtual environment software that supports design simulation for the new Vision for Space Exploration (VSE). This is a simulation of NASA's proposed Robotic Lunar Exploration Program, second mission (RLEP2). It simulates the Lunar Surface Access Module (LSAM), which is designed to carry up to four astronauts to the lunar surface for durations of a week or longer. This simulation shows the virtual vehicle making approaches and landings on a variety of lunar terrains. The physics of the descent engine thrust vector, the production of dust, and the dynamics of the suspension are all modeled in this set of simulations. The RLEP2 simulations are drivable (by keyboard or joystick) virtual rovers with controls for speed and motor torque, and can be articulated into higher or lower centers of gravity (depending on driving hazards) to enable drill placement. Gravity can also be set to lunar, terrestrial, or zero-g. This software has been used to support NASA's Marshall Space Flight Center in simulations of proposed vehicles for robotically exploring the lunar surface for water ice, and could be used to model all other aspects of the VSE, from the Ares launch vehicles and Crew Exploration Vehicle (CEV) to the International Space Station (ISS). This simulator may be installed and operated on any Windows PC with a 3D graphics card.

  14. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research video and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made these videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information, because operating staff classify the videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions that synchronize various dive survey data with videos in a 3-dimensional display. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos along with associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained during the dive survey. Users can browse a dive track visualized in a 3D virtual space using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point along the track. Users can play an animation in which a submersible-shaped polygon automatically traces the 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can also refer directly to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed.
    A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, synchronizing the virtual dive track with videos makes it easy to understand the living organisms and geological environment at a dive point. These functions will therefore visually support understanding of deep-sea environments in lectures and educational activities.

  15. Construction of a 3-D anatomical model for teaching temporal lobectomy.

    PubMed

    de Ribaupierre, Sandrine; Wilson, Timothy D

    2012-06-01

    Although we live and work in 3-dimensional space, most anatomical teaching during medical school is done in 2-D (books, television and computer screens, etc.). 3-D spatial abilities are essential for a surgeon, but teaching spatial skills in a non-threatening and safe educational environment is a much more difficult pedagogical task. Currently, initial anatomical knowledge formation and specific surgical anatomy techniques are taught either in the OR itself or in cadaveric labs, which means that the trainee has only limited exposure. 3-D computer models incorporated into virtual learning environments may provide an intermediate and key step in a blended learning approach for spatially challenging anatomical knowledge formation. Specific anatomical structures and their spatial orientation can be further clinically contextualized through demonstrations of surgical procedures in 3-D digital environments. Recordings of digital models enable learner review: learners can take as much time as they want, stop the demonstration, and explore the model to understand the anatomical relation of each structure. We present here how a temporal lobectomy virtual model has been developed to aid residents' and fellows' conceptualization of the anatomical relationships between different cerebral structures during that procedure. We suggest that, in comparison to cadaveric dissection, such virtual models represent a cost-effective pedagogical methodology providing excellent support for anatomical learning and surgical technique training. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used Auto CAD, Sketch Up, and SpaceEyes 3D software to construct the virtual reality scenes and recreate the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology we completed the Mackay campus environmental education and digital cultural platform. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are designed according to their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they take to view the historical sites on the platform.

  17. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving the detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  18. NASA Virtual Glovebox (VBX): Emerging Simulation Technology for Space Station Experiment Design, Development, Training and Troubleshooting

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard

    2003-01-01

    The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented, and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for the design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desk-top sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and diffusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.

  19. Armagh Observatory - Historic Building Information Modelling for Virtual Learning in Building Conservation

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Chenaux, A.; Keenaghan, G.; Gibson, V.; Butler, J.; Pybus, C.

    2017-08-01

    In this paper the recording and design of a Virtual Reality Immersive Model of Armagh Observatory is presented, which replicates the historic buildings and landscape, with distant meridian markers, and positions its principal historic instruments within a model of the night sky showing the positions of bright stars. The virtual reality model can be used for educational purposes, allowing the instruments within the historic building model to be manipulated in 3D space to demonstrate how the position measurements of stars were made in the 18th century. A description is given of current student and researcher activities concerning on-site recording, surveying and virtual modelling of the buildings and landscape. This is followed by a design for a Virtual Reality Immersive Model of Armagh Observatory using game engines, virtual learning platforms and related concepts.

  20. Intraoperative virtual brain counseling

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando

    1997-06-01

    Our objective is to offer online, real-time intelligent guidance to the neurosurgeon. Unlike traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. To fulfill this objective, tracking techniques are employed for intra-operative use. Most importantly, the 3D virtual brain environment, unlike traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS, and algorithms for spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for the optimization of treatment plans and online intelligent surgical guidance.

  1. Development and comparison of projection and image space 3D nodule insertion techniques

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. Twenty-four physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules into CT datasets, and can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
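
    The volume comparison described above can be illustrated with a small paired t-test sketch. The volume values below are hypothetical, not the study's iNtuition segmentations:

```python
# Paired t-test between physically and virtually inserted nodule volumes.
# All numbers are invented for illustration.
import math

def paired_t(a, b):
    """Return (t statistic, degrees of freedom) for paired samples a, b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of diffs
    return mean / math.sqrt(var / n), n - 1

physical = [524.1, 1150.3, 268.0, 540.2, 1170.9, 275.5]   # mm^3, hypothetical
virtual_ = [530.7, 1141.8, 271.2, 533.9, 1179.4, 270.1]

t, df = paired_t(physical, virtual_)
pct_diff = [abs(p - v) / p * 100 for p, v in zip(physical, virtual_)]
print(f"t = {t:.3f} (df = {df}), max % difference = {max(pct_diff):.2f}")
```

    With differences this small relative to their spread, the t statistic stays well inside the acceptance region, mirroring the "not statistically significant" finding reported above.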

  2. Hippocampus, Retrosplenial and Parahippocampal Cortices Encode Multicompartment 3D Space in a Hierarchical Manner.

    PubMed

    Kim, Misun; Maguire, Eleanor A

    2018-05-01

    Humans commonly operate within 3D environments such as multifloor buildings and yet there is a surprising dearth of studies that have examined how these spaces are represented in the brain. Here, we had participants learn the locations of paintings within a virtual multilevel gallery building and then used behavioral tests and fMRI repetition suppression analyses to investigate how this 3D multicompartment space was represented, and whether there was a bias in encoding vertical and horizontal information. We found faster response times for within-room egocentric spatial judgments and behavioral priming effects of visiting the same room, providing evidence for a compartmentalized representation of space. At the neural level, we observed a hierarchical encoding of 3D spatial information, with left anterior hippocampus representing local information within a room, while retrosplenial cortex, parahippocampal cortex, and posterior hippocampus represented room information within the wider building. Of note, both our behavioral and neural findings showed that vertical and horizontal location information was similarly encoded, suggesting an isotropic representation of 3D space even in the context of a multicompartment environment. These findings provide much-needed information about how the human brain supports spatial memory and navigation in buildings with numerous levels and rooms.

  3. A Survey of Real-Time Operating Systems and Virtualization Solutions for Space Systems

    DTIC Science & Technology

    2015-03-01

    probe, an unmanned spacecraft orbiting Mercury (“Messenger,” n.d.; “VxWorks Space,” n.d.). SpaceX, the private space travel company, uses an unspecified...VxWorks platform on its Dragon reusable spacecraft (“SpaceX,” n.d.). 5 Supports the 1003.1 standard but does not provide process creation...2013, March 6). ELC: SpaceX lessons learned. Retrieved from http://lwn.net/ Articles/540368/ 112 Embedded hardware. (n.d.). Retrieved

  4. Virtual Surgery for Conduit Reconstruction of the Right Ventricular Outflow Tract.

    PubMed

    Ong, Chin Siang; Loke, Yue-Hin; Opfermann, Justin; Olivieri, Laura; Vricella, Luca; Krieger, Axel; Hibino, Narutoshi

    2017-05-01

    Virtual surgery involves the planning and simulation of surgical reconstruction using three-dimensional (3D) modeling based upon individual patient data, augmented by simulation of planned surgical alterations including implantation of devices or grafts. Here we describe a case in which virtual cardiac surgery aided us in determining the optimal conduit size to use for the reconstruction of the right ventricular outflow tract. The patient is a young adolescent male with a history of tetralogy of Fallot with pulmonary atresia, requiring right ventricle-to-pulmonary artery (RV-PA) conduit replacement. Utilizing preoperative magnetic resonance imaging data, virtual surgery was undertaken to construct his heart in 3D and to simulate the implantation of three different sizes of RV-PA conduit (18, 20, and 22 mm). Virtual cardiac surgery allowed us to predict the ability to implant a conduit of a size that would likely remain adequate in the face of continued somatic growth and also allow for the possibility of transcatheter pulmonary valve implantation at some time in the future. Subsequently, the patient underwent uneventful conduit change surgery with implantation of a 22-mm Hancock valved conduit. As predicted, the intrathoracic space was sufficient to accommodate the relatively large conduit size without geometric distortion or sternal compression. Virtual cardiac surgery gives surgeons the ability to simulate the implantation of prostheses of different sizes in relation to the dimensions of a specific patient's own heart and thoracic cavity in 3D prior to surgery. This can be very helpful in predicting optimal conduit size, determining appropriate timing of surgery, and patient education.

  5. Virtual surgical planning in endoscopic skull base surgery.

    PubMed

    Haerle, Stephan K; Daly, Michael J; Chan, Harley H L; Vescan, Allan; Kucharczyk, Walter; Irish, Jonathan C

    2013-12-01

    Skull base surgery (SBS) involves operative tasks in close proximity to critical structures in a complex three-dimensional (3D) anatomy. The aim was to investigate the value of virtual planning (VP) based on preoperative magnetic resonance imaging (MRI) for surgical planning in SBS and to compare the effects of virtual planning with 3D contours between the expert and the surgeon in training. Retrospective analysis. Twelve patients with manually segmented anatomical structures based on preoperative MRI were evaluated by eight surgeons in a randomized order using a validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Multivariate analysis revealed significant reduction of workload when using VP (P<.0001) compared to standard planning. Further, it showed that the experience level of the surgeon had a significant effect on the NASA-TLX differences (P<.05). Additional subanalysis did not reveal any significant findings regarding which type of surgeon benefits the most (P>.05). Preoperative anatomical segmentation with virtual surgical planning using contours in endoscopic SBS significantly reduces the workload for the expert and the surgeon in training. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  7. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    The Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT.709. A prerequisite for this is a 3D model which can be created within this computer-aided virtual environment. The on-site measured surface luminance, chromaticity, and spectral data were used as input to an established real-time indirect illumination and a physically based algorithm to produce the best approximation for RGB to be used as input to generate the image of the objects. Conversion of RGB to and from spectra has been a major undertaking, since an infinite number of spectra must be matched to create the same colors that were defined by RGB in the program. The ability to simulate light intensity, candle power, and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under a precisely controlled representation of the light spectrum that can be experienced in 3D format in a virtual environment as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.
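
    As a minimal illustration of the device-dependent colour step mentioned above, the relative luminance of a Rec.709 RGB triplet can be computed from the standard luma coefficients. Treating the RGB values as already linear-light is a simplifying assumption here:

```python
# Relative luminance of a linear-light Rec.709 RGB triplet in [0, 1],
# using the standard ITU-R BT.709 luma coefficients.
def rec709_luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The coefficients sum to 1, so a neutral grey maps to its own value,
# and green dominates the luminance of a saturated primary.
print(rec709_luminance(0.5, 0.5, 0.5))  # 0.5 for neutral grey
print(rec709_luminance(0.0, 1.0, 0.0))  # 0.7152 for pure green
```

    A full luminance-mapping pipeline like the one described in the record would of course work with measured spectra and display calibration, not this single triplet formula.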

  8. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  9. Design of combinatorial libraries for the exploration of virtual hits from fragment space searches with LoFT.

    PubMed

    Lessel, Uta; Wellenzohn, Bernd; Fischer, J Robert; Rarey, Matthias

    2012-02-27

    A case study is presented illustrating the design of a focused CDK2 library. The scaffold of the library was detected by a Feature Trees search in a fragment space based on reactions from combinatorial chemistry. For the design, the software LoFT (Library optimizer using Feature Trees) was used. Its special feature called FTMatch was applied to restrict the parts of the queries where the reagents are permitted to match; in this way a 3D scoring function could be simulated. Results were compared with alternative designs by GOLD docking and ROCS 3D alignments.

  10. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    PubMed

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge, the presented work is the first to automatically register both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience, as it provides neuroanatomists with tools for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.
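
    The final non-linear registration step, solving Laplace's equation with Dirichlet boundary conditions, can be sketched with simple Gauss-Seidel relaxation on one scalar component of a displacement field. The grid size and boundary values below are illustrative, not the paper's deformation data:

```python
# Gauss-Seidel relaxation toward the solution of Laplace's equation on an
# n x n grid with Dirichlet boundary conditions. Illustrative sketch only.
def solve_laplace(boundary, n, iters=2000):
    """boundary: dict {(i, j): value} of Dirichlet constraints on an n x n grid."""
    u = [[boundary.get((i, j), 0.0) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i, j) not in boundary:
                    # each interior cell relaxes to the mean of its neighbours
                    u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return u

n = 9
# Fix the displacement to 1.0 on the top edge and 0.0 on the other three edges.
bc = {(0, j): 1.0 for j in range(n)}
bc.update({(n - 1, j): 0.0 for j in range(n)})
bc.update({(i, 0): 0.0 for i in range(1, n - 1)})
bc.update({(i, n - 1): 0.0 for i in range(1, n - 1)})

u = solve_laplace(bc, n)
print(f"centre displacement: {u[n//2][n//2]:.3f}")  # smoothly between 0 and 1
```

    In a registration setting the boundary constraints would come from matched contour points, and the harmonic interior gives a smooth interpolation of the displacement between them.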

  11. Use of 3D techniques for virtual production

    NASA Astrophysics Data System (ADS)

    Grau, Oliver; Price, Marc C.; Thomas, Graham A.

    2000-12-01

    Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material which make use of 3D technology. The applications range from analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio, a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma-keying techniques, and the isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. So far, the resulting shape description of the actors is 2D. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements on shape accuracy, and the kind of representation, differ according to the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced virtual studio for 3D programs is proposed that covers a range of applications for virtual production.
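
    The chroma-keying step described above can be sketched as a per-pixel distance test against the backing colour. The toy frame and threshold below are illustrative; real studio keyers produce soft mattes on full video frames in real time:

```python
# Toy binary chroma key: pixels close to the studio backing colour are
# keyed out, isolating the actor's silhouette. Values are invented.
def chroma_key(image, key=(0, 255, 0), threshold=100):
    """Return a binary matte: 1 = foreground (actor), 0 = backing colour."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [[0 if dist2(px, key) < threshold ** 2 else 1 for px in row]
            for row in image]

frame = [
    [(0, 250, 5), (200, 40, 30), (0, 255, 0)],
    [(10, 240, 0), (190, 50, 25), (5, 248, 2)],
]
matte = chroma_key(frame)
print(matte)  # [[0, 1, 0], [0, 1, 0]]
```

    The matte gives exactly the 2D silhouette the record refers to; the object-based 3D description it calls for requires additional shape recovery beyond this per-pixel test.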

  12. Shader Lamps Virtual Patients: the physical manifestation of virtual patients.

    PubMed

    Rivera-Gutierrez, Diego; Welch, Greg; Lincoln, Peter; Whitton, Mary; Cendan, Juan; Chesnutt, David A; Fuchs, Henry; Lok, Benjamin

    2012-01-01

    We introduce the notion of Shader Lamps Virtual Patients (SLVP) - the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient for medical students to conduct ophthalmic exams, in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. Here we present a prototype system and results from a preliminary formative evaluation of the system.

  13. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  14. Human Activity Modeling and Simulation with High Biofidelity

    DTIC Science & Technology

    2013-01-01

    Human activity Modeling and Simulation (M&S) plays an important role in simulation-based training and Virtual Reality (VR). However, human activity M...kinematics and motion mapping/creation; and (e) creation and replication of human activity in 3-D space with true shape and motion. A brief review is

  15. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  16. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some complex software issues, which are discussed in the paper.

  17. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  18. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  19. Mobile Applications and Multi-User Virtual Reality Simulations

    NASA Technical Reports Server (NTRS)

    Gordillo, Orlando Enrique

    2016-01-01

    This is my third internship with NASA and my second at the Johnson Space Center. I work within the engineering directorate in ER7 (Software Robotics and Simulations Division) at a graphics lab called IGOAL. We are a very well-rounded lab because we have dedicated software developers and dedicated 3D artists, and when you combine the two, you get the ability to create many different things, such as interactive simulations, 3D models, animations, and mobile applications.

  20. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

    replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft , marine vehicles, and air traffic...free space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...generates and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a

  2. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, several factors prevent us from choosing them for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems ideal for interaction using a 3D dome display, other non-spherical grids can also be used.

  3. Planning, implementation and optimization of future space missions using an immersive visualization environment (IVE) machine

    NASA Astrophysics Data System (ADS)

    Nathan Harris, E.; Morgenthaler, George W.

    2004-07-01

    Beginning in 1995, a team of 3-D engineering visualization experts assembled at the Lockheed Martin Space Systems Company and began to develop innovative virtual prototyping simulation tools for performing ground processing and real-time visualization of the design and planning of aerospace missions. At the University of Colorado, a team of 3-D visualization experts also began developing the science of 3-D and immersive visualization at the newly founded British Petroleum (BP) Center for Visualization, which began operations in October 2001. BP acquired ARCO in 2000 and awarded the flexible 3-D IVE that ARCO had developed (beginning in 1990) to the University of Colorado (CU), the winner of a competition among six universities. CU then hired Dr. G. Dorn, the leader of the ARCO team, as Center Director, along with the other experts, to apply 3-D immersive visualization to aerospace and to other university research fields, while continuing research on surface interpretation of seismic data and 3-D volumes. This paper recounts further progress and outlines plans for aerospace applications at Lockheed Martin and CU.

  4. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. In particular, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back into 3D space.
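
    The lateral (in-plane) part of such a distortion model can be sketched with the standard radial and tangential (Brown-Conrady) terms applied to a normalised image point. The coefficient values below are illustrative, and the paper's depth-distortion parameters are not reproduced here:

```python
# Standard radial (k1, k2) and tangential (p1, p2) lens distortion applied
# to a normalised image point (x, y). Coefficients are illustrative.
def distort(x, y, k1, k2, p1, p2):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2          # radial scaling of the point
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With a negative k1 the point is pulled slightly toward the centre.
xd, yd = distort(0.1, -0.05, k1=-0.2, k2=0.05, p1=1e-3, p2=-5e-4)
print(xd, yd)
```

    In a calibration such as the one described above, these coefficients would be estimated jointly with the intrinsics inside the bundle adjustment rather than set by hand.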

  5. Review of Virtual Environment Interface Technology.

    DTIC Science & Technology

    1996-03-01

    1.9 SpacePad 56 1.10 CyberTrack 3.2 57 1.11 Wayfinder-VR 57 1.12 Mouse-Sense3D 57 1.13 Selcom AB, SELSPOT H 57 1.14 OPTOTRAK 3020 58 1.15...Wayfinder-VR 57 Figure 38. Mouse-Sense3D 57 Figure 39. SELSPOTII 58 Figure 40. OPTOTRAK 3020 58 Figure 41. MacReflex 58 Figure 42. DynaSight 59...OPTOTRAK3020 The OPTOTRAK 3020 by Northern Digital Inc. is an infra-red (IR)-based, non- contact position and motion measurement sys- tem. Small IR LEDs

  6. Optimal affinity ranking for automated virtual screening validated in prospective D3R grand challenges

    NASA Astrophysics Data System (ADS)

    Wingert, Bentley M.; Oerlemans, Rick; Camacho, Carlos J.

    2018-01-01

    The goal of virtual screening is to generate a substantially reduced and enriched subset of compounds from a large virtual chemistry space. Critical in these efforts are methods to properly rank the binding affinity of compounds. Prospective evaluations of ranking strategies in the D3R grand challenges show that for targets with deep pockets the best correlations (Spearman ρ 0.5) were obtained by our submissions that docked compounds to the holo-receptors with the most chemically similar ligand. On the other hand, for targets with open pockets, using multiple receptor structures is not a good strategy. Instead, docking to a single optimal receptor led to the best correlations (Spearman ρ 0.5) and overall performed better than any other method. Yet, choosing a suboptimal receptor for cross-docking can significantly undermine the affinity rankings. Our submissions that evaluated the free energy of congeneric compounds were also among the best in the community experiment. Error bars of around 1 kcal/mol are still too large to significantly improve the overall rankings. Collectively, our top-of-the-line predictions show that automated virtual screening with rigid receptors performs better than flexible docking and other more complex methods.
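
    The ranking evaluation described above can be illustrated by computing Spearman's rho between predicted docking scores and measured affinities. The score and affinity values below are invented for illustration, not D3R data, and the toy rank function ignores ties:

```python
# Spearman rank correlation between predicted and measured values,
# using the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula (no tie handling).
def rankdata(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

predicted = [-9.1, -7.4, -8.3, -6.0, -8.9]   # docking scores, hypothetical
measured  = [-8.8, -7.0, -7.9, -6.5, -9.2]   # affinities, hypothetical

print(f"Spearman rho = {spearman(predicted, measured):.2f}")  # 0.90
```

    Because the statistic depends only on the rank ordering, it is well suited to comparing submissions whose raw scores live on different scales, as in the challenge evaluations above.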

  7. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…

  8. Innovations in Education and Entertainment Settings: A Quest for Convergence

    ERIC Educational Resources Information Center

    Fanning, Elizabeth; Bunch, John; Brighton, Catherine

    2011-01-01

    The purpose of this study was to compare the production processes and approaches for user engagement of virtual environments created for learning or commercial and entertainment purposes, specifically through online games and 3-D online spaces. This study used a qualitative, multiple case study approach based on interviews with developers of…

  9. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    DTIC Science & Technology

    2014-10-28

    Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment. Distribution A: Approved

  10. Spatial issues in user interface design from a graphic design perspective

    NASA Technical Reports Server (NTRS)

    Marcus, Aaron

    1989-01-01

    The user interface of a computer system is a visual display that provides information about the status of operations on data within the computer and control options to the user that enable adjustments to these operations. From the very beginning of computer technology the user interface was a spatial display, although its spatial features were not necessarily complex or explicitly recognized by the users. All text and nonverbal signs appeared in a virtual space generally thought of as a single flat plane of symbols. Current technology of high performance workstations permits any element of the display to appear as dynamic, multicolor, 3-D signs in a virtual 3-D space. The complexity of appearance and the user's interaction with the display provide significant challenges to the graphic designer of current and future user interfaces. In particular, spatial depiction provides many opportunities for effective communication of objects, structures, processes, navigation, selection, and manipulation. Issues are presented that are relevant to the graphic designer seeking to optimize the user interface's spatial attributes for effective visual communication.

  11. SSVEP-based BCI for manipulating three-dimensional contents and devices

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Cho, Sungjin; Whang, Mincheol; Ju, Byeong-Kwon; Park, Min-Chul

    2012-06-01

    Brain Computer Interface (BCI) studies have been done to help people manipulate electronic devices in a 2D space, but less has been done for a vigorous 3D environment. The purpose of this study was to investigate the possibility of applying Steady State Visual Evoked Potentials (SSVEPs) to a 3D LCD display. Eight subjects (4 females) aged 20 to 26 years participated in the experiment. They performed simple navigation tasks in a simple 2D space and in a virtual environment with/without 3D flickers generated by a Film-type Patterned Retarder (FPR). The experiments were conducted in a counterbalanced order. The results showed that 3D stimuli enhanced BCI performance, but no significant effects were found due to the small number of subjects. Visual fatigue that might be evoked by 3D stimuli was negligible in this study. The proposed SSVEP BCI combined with 3D flickers can allow people to control home appliances and other equipment such as wheelchairs, prosthetics, and orthotics without encountering dangerous situations that may arise when using BCIs in the real world. A 3D stimuli-based SSVEP BCI would also motivate people to use 3D displays and vitalize the 3D-related industry through its entertainment value and high performance.
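    The core of an SSVEP classifier of the kind described is a spectral peak-picker: the flicker a user attends to evokes occipital EEG activity at that flicker's frequency, so the detector compares spectral power at the candidate stimulus frequencies and picks the strongest. The sketch below is an illustrative minimal version on a synthetic signal, not the authors' actual pipeline (which the abstract does not specify).

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs):
    """Pick the stimulus frequency with the largest spectral power.

    eeg:        1-D EEG samples from an occipital channel
    fs:         sampling rate in Hz
    stim_freqs: candidate flicker frequencies (Hz), one per target
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Power at the FFT bin nearest each candidate frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

# Synthetic check: a 10 Hz "flicker response" buried in noise
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
detected = classify_ssvep(eeg, fs, [8.0, 10.0, 12.0])
```

Real systems typically add band-pass filtering, harmonics, or canonical correlation analysis, but the frequency-matching principle is the same.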

  12. The use of strain gauge platform and virtual reality tool for patient stability examination

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech; Wysk, Lukasz; Skoczylas, Marcin

    2016-09-01

    Virtual reality is one of the fastest growing information technologies. This paper is a prelude to a larger study on the use of virtual reality tools for analysing the bony labyrinth and the sense of balance. Problems with the functioning of these areas of the body remain a controversial topic among specialists, and still-unresolved imbalance treatments leave a constant number of people reporting this type of ailment. Considering the above, the authors created a system and application that contains a model of a virtual environment and a tool for modifying obstacles in 3D space. Preliminary studies of patients from a test group aged 22-49 years were also carried out, in which behaviour and sense of balance were analysed in relation to the horizontal curvature of the virtual world around the patient. Experiments carried out on the test group showed that the curvature of the virtual world space and the age of the patient have a major impact on the sense of balance. The data obtained can be linked with actual disorders of the bony labyrinth and human behaviour at the time of their occurrence. Another important achievement, which will be the subject of further work, is the possible use of a modified version of the software for rehabilitation purposes.
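    Stability examination on a strain-gauge platform typically reduces the corner load-cell signals to a centre-of-pressure trajectory, whose sway is then compared across stimulus conditions. The abstract does not give the authors' processing, so the function below is a hypothetical sketch for a rectangular platform with one vertical-force sensor per corner.

```python
def center_of_pressure(f_corners, width, depth):
    """Centre of pressure on a rectangular four-load-cell platform.

    f_corners:    vertical forces (N) at corners, ordered
                  (front-left, front-right, back-left, back-right)
    width, depth: platform dimensions in metres
    Returns (x, y) relative to the platform centre.
    """
    fl, fr, bl, br = f_corners
    total = fl + fr + bl + br
    # Moment balance: weight each half-dimension by the force asymmetry
    x = (width / 2) * ((fr + br) - (fl + bl)) / total
    y = (depth / 2) * ((fl + fr) - (bl + br)) / total
    return x, y

# Even loading puts the centre of pressure at the platform centre
cop = center_of_pressure((100.0, 100.0, 100.0, 100.0), 0.4, 0.6)
# All weight on the right edge shifts it to x = +width/2
right = center_of_pressure((0.0, 200.0, 0.0, 200.0), 0.4, 0.6)
```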

  13. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery either through developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove and Ascension Technologies tracker, is currently in development and has already been used to visualize defects preoperatively. In the near future it will be used to more fully plan the surgery and compute the projected result to soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, and networked virtual environment.

  14. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural and building design, industrial design, aeronautics, scientific research, entertainment, media advertisement, and military areas. However, most technologies provide the 3D display in front of screens that are parallel with the walls, which decreases the sense of immersion. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, i.e. ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane setting is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, providing high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
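    The offset perspective projection described corresponds to an asymmetric (off-axis) view frustum, in which the near-plane window is shifted relative to the camera axis so the sensor can stay parallel to the common focus plane. A minimal sketch of the standard OpenGL-style frustum matrix follows; the parameters are illustrative, not the paper's actual camera settings.

```python
import numpy as np

def offset_frustum(left, right, bottom, top, near, far):
    """OpenGL-style asymmetric (off-axis) perspective projection matrix.

    When left != -right or bottom != -top the frustum is offset, which
    is what keeps a shifted camera's image plane parallel to a shared
    focus plane in multi-view stereo rigs.
    """
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)   # horizontal offset term
    m[1, 2] = (top + bottom) / (top - bottom)   # vertical offset term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# Symmetric frustum as a sanity check: the off-axis terms vanish
sym = offset_frustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0)
```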

  15. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.

  16. Research on Visualization of Ground Laser Radar Data Based on OSG

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is a new advanced technology integrating optical, mechanical, electronic, and computer technologies. It can scan the whole shape and form of spatial objects with high precision, directly collecting the point cloud data of a ground object and reconstructing its structure for rendering. An efficient 3D rendering engine is needed to optimize and display such models, meeting the demands of real-time realistic rendering of complex scenes. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and is therefore widely used in virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. The platform displays both 3D laser point clouds from .txt files and triangulated mesh data from .obj files. Experiments show that the platform is easy to operate, provides good interaction, and is of strong practical value.
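    A .txt point cloud of the kind the platform loads is typically just whitespace-separated x y z rows (possibly with extra columns such as intensity or colour). Before handing points to a renderer such as OSG, one usually parses them and computes a bounding box for camera placement. A hedged sketch, since the paper does not specify its exact file layout or loader:

```python
def load_xyz(lines):
    """Parse a whitespace-separated .txt point cloud (x y z [extras])."""
    pts = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 3:                 # skip blank/malformed rows
            pts.append(tuple(float(v) for v in parts[:3]))
    return pts

def bounding_box(pts):
    """Axis-aligned bounding box: ((min x,y,z), (max x,y,z))."""
    xs, ys, zs = zip(*pts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

pts = load_xyz(["0 0 0", "1.5 2.0 -1.0", "0.5 1.0 3.0"])
box = bounding_box(pts)
```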

  17. Haptics in forensics: the possibilities and advantages in using the haptic device for reconstruction approaches in forensic science.

    PubMed

    Buck, Ursula; Naether, Silvio; Braun, Marcel; Thali, Michael

    2008-09-18

    Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch is lost in the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by its muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models, to feel minute injuries and the surface of tools, to reposition displaced bone parts, and to compare an injury-causing instrument with an injury. Repositioning 3D models in a reconstruction is easier, faster and more precise when using the sense of touch, together with user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science, offering a new way of handling digital data in the virtual 3D space.

  18. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  19. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and its implementation in a simulation code for space-charge-dominated photoemission processes.
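    The field the fast multipole algorithm accelerates is the plain pairwise Coulomb sum, which costs O(n²) when evaluated directly; the paper's method reaches the same fields in O(n). A brute-force reference sketch (not the authors' differential-algebra implementation) makes the baseline explicit:

```python
import numpy as np

def coulomb_fields(positions, charges, k=8.9875517923e9):
    """Direct pairwise Coulomb electric field at each particle, O(n^2).

    E_i = k * sum_{j != i} q_j (r_i - r_j) / |r_i - r_j|^3
    This is the brute-force sum that a fast multipole method
    approximates hierarchically in O(n).
    """
    n = len(positions)
    E = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[i] - positions[j]
            E[i] += k * charges[j] * r / np.linalg.norm(r) ** 3
    return E

# Two equal charges 1 m apart: fields are equal and opposite
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([1e-9, 1e-9])
E = coulomb_fields(pos, q)
```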

  20. Compression simulations of plant tissue in 3D using a mass-spring system approach and discrete element method.

    PubMed

    Pieczywek, Piotr M; Zdunek, Artur

    2017-10-18

    A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed using particles which cover the surfaces of cell walls and are interconnected in a triangle mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: viscoelastic material of cell walls, linearly elastic interior of the cells (simulating compressible liquid) and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated during quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size and cell wall properties, such as thickness and Young's modulus, and a stiffness of the cell interior that simulated turgor pressure. The extent of agreement between the simulation results and other published models is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of the compression test results with respect to changes occurring in the structure of the virtual tissue. During compression, virtual structures composed of smaller cells produced higher reaction forces and were therefore stiffer than structures with large cells. An increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed in experiments. Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of neighboring cells is one of the most prevalent ways for tissue to rupture.
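    The viscoelastic springs interconnecting wall particles can be modelled, for illustration, as Kelvin-Voigt elements: an elastic force proportional to stretch plus a damping force proportional to the relative velocity along the spring. The abstract does not give the authors' exact force law, so the sketch below is a minimal assumed version:

```python
import numpy as np

def spring_force(xa, xb, va, vb, rest_len, k, c):
    """Kelvin-Voigt viscoelastic spring force acting on particle a.

    xa, xb: particle positions;  va, vb: particle velocities
    k: spring stiffness;  c: damping coefficient
    Elastic term ~ stretch, damping term ~ relative speed along
    the spring axis; both act along the unit vector a -> b.
    """
    d = xb - xa
    length = np.linalg.norm(d)
    u = d / length                      # unit vector a -> b
    stretch = length - rest_len
    rel_vel = np.dot(vb - va, u)        # opening/closing speed
    return (k * stretch + c * rel_vel) * u

# Spring stretched to twice its rest length, both ends at rest:
# pure elastic pull of magnitude k * stretch toward particle b
f = spring_force(np.zeros(3), np.array([2.0, 0.0, 0.0]),
                 np.zeros(3), np.zeros(3), rest_len=1.0, k=10.0, c=0.5)
```

In a full mass-spring simulation this force is accumulated for every mesh edge and integrated in time, with additional terms for the elastic cell interior and gas phase described in the paper.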

  1. 3D Inhabited Virtual Worlds: Interactivity and Interaction between Avatars, Autonomous Agents, and Users.

    ERIC Educational Resources Information Center

    Jensen, Jens F.

    This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…

  2. Virtual Reality Simulation of the Effects of Microgravity in Gastrointestinal Physiology

    NASA Technical Reports Server (NTRS)

    Compadre, Cesar M.

    1998-01-01

    The ultimate goal of this research is to create an anatomically accurate three-dimensional (3D) simulation model of the effects of microgravity in gastrointestinal physiology and to explore the role that such changes may have in the pharmacokinetics of drugs given to space crews for prevention or therapy. To accomplish this goal the specific aims of this research are: 1) To generate complete 3-D reconstructions of the human GastroIntestinal (GI) tract of the male and female Visible Humans. 2) To develop and implement time-dependent computer algorithms to simulate GI motility using the above 3-D reconstructions.

  3. Voxel inversion of airborne electromagnetic data for improved model integration

    NASA Astrophysics Data System (ADS)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    Inversion of electromagnetic data has migrated from single-site interpretations to inversions of entire surveys using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points; for airborne electromagnetic (AEM) surveys the spatial discretization of the model space reflects the flight lines. In contrast, geological and groundwater models most often refer to a regular voxel grid not correlated with the geophysical model space, and the geophysical information has to be relocated for integration into (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models were gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data, while fitting the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
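    Step 2 of the forward computation interpolates node resistivities at the centres of the "virtual" layers; with the inverse-distance option this is plain inverse distance weighting (IDW). A minimal sketch, using 2D node coordinates for brevity where the survey's grid is of course 3D:

```python
import numpy as np

def idw(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of node values at `query`.

    nodes:  (n, d) array of node coordinates
    values: (n,) array of the property (e.g. resistivity) at the nodes
    query:  (d,) coordinate where the property is wanted
    Weight of each node is 1 / distance**power.
    """
    d = np.linalg.norm(nodes - query, axis=1)
    if d.min() < eps:                      # query sits exactly on a node
        return float(values[int(np.argmin(d))])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

nodes = np.array([[0.0, 0.0], [1.0, 0.0]])
vals = np.array([10.0, 30.0])
mid = idw(nodes, vals, np.array([0.5, 0.0]))   # equidistant -> mean
```

In the voxel scheme this evaluation is repeated at every virtual-layer centre, so the geophysical forward response depends on grid-node properties rather than on properties tied to the sounding locations.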

  4. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
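    The representation-selection logic described (3D model up close, billboard at intermediate range, environment map far away, and always the 3D model during interaction) can be sketched as a simple range test. The distance thresholds here are illustrative placeholders, not the perceptual switching distances the authors derive:

```python
def pick_representation(distance, interacting,
                        model_range=10.0, billboard_range=50.0):
    """Choose a representation for an object in the mixed scene graph.

    distance:    viewing distance from the user to the object
    interacting: True while the user is manipulating the object
    Ranges are hypothetical; in the paper the model/billboard switch
    happens where the user starts to perceive the object's internal
    depth.
    """
    if interacting:
        return "3d_model"        # interaction always uses 3D, if present
    if distance <= model_range:
        return "3d_model"
    if distance <= billboard_range:
        return "billboard"
    return "environment_map"

choice = pick_representation(distance=30.0, interacting=False)
```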

  5. Usability Evaluation of an Adaptive 3D Virtual Learning Environment

    ERIC Educational Resources Information Center

    Ewais, Ahmed; De Troyer, Olga

    2013-01-01

    Using 3D virtual environments for educational purposes is becoming attractive because of their rich presentation and interaction capabilities. Furthermore, dynamically adapting the 3D virtual environment to the personal preferences, prior knowledge, skills and competence, learning goals, and the personal or (social) context in which the learning…

  6. Along the Virtuality Continuum - Two Showcases on how xR Technologies Transform Geoscience Research and Education

    NASA Astrophysics Data System (ADS)

    Klippel, A.; Zhao, J.; Masrur, A.; Wallgruen, J. O.; La Femina, P. C.

    2017-12-01

    We present work along the virtuality continuum, showcasing both AR and VR environments for geoscience applications and research. The AR/VR project focuses on one of the most prominent landmarks on the Penn State campus which, at the same time, is a representation of the geology of Pennsylvania. The Penn State Obelisk is a 32' high, 51 ton monument composed of 281 rocks collected from across Pennsylvania. While information about its origins and composition is scattered across articles and some web databases, we compiled all the available data from the web and archives and curated them as a basis for an immersive xR experience. Tabular data was amended by xR data such as 360° photos, videos, and 3D models (e.g., of the Obelisk). Our xR (both AR and VR) prototype provides an immersive analytical environment that supports interactive data visualization and virtual navigation in a natural environment (a campus model of today and of 1896, the year of the Obelisk's installation). This work-in-progress project can provide an interactive immersive learning platform (specifically, for K-12 and introductory-level geosciences students) where the learning process is enhanced through seamless navigation between 3D data space and physical space. The second, VR-focused application is creating and empirically evaluating virtual reality (VR) experiences for geosciences research, specifically an interactive volcano experience based on LiDAR and image data of Iceland's Thrihnukar volcano. The prototype addresses the lack of content and tools for immersive virtual reality (iVR) in geoscientific education and research, and how to make it easier to integrate iVR into research and classroom experiences. It makes use of environmentally sensed data such that interaction and linked content can be integrated into a single experience. We discuss our workflows as well as methods and authoring tools for iVR analysis and the creation of virtual experiences. These methods and tools aim to enhance the utility of geospatial data from repositories such as OpenTopography.org by unlocking treasure-troves of geospatial data for VR applications. Their enhanced accessibility in education and research for the geosciences and beyond will benefit geoscientists and educators who cannot be expected to be VR and 3D application experts.

  7. 3D Visibility Analysis in Urban Environment - Cognition Research Based on VGE

    NASA Astrophysics Data System (ADS)

    Lin, T. P.; Lin, H.; Hu, M. Y.

    2013-09-01

    In this research the author attempts to illustrate a measurable relationship between the physical environment and human visual perception, including distance, visual angle and visual field (a 3D isovist conception) in relation to human cognition, using a 3D visibility analysis method based on the platform of the Virtual Geographic Environment (VGE). The project is carried out on the CUHK campus (the Chinese University of Hong Kong), adopting a virtual 3D model of the whole campus together with a real-world survey. The expected output of this research is a possible model for the simulation of human cognition in urban spaces: what humans perceive from the environment, how their feelings and behaviours arise, and how they affect the surrounding world. Kevin Lynch proposed five elements of urban design in the 1960s: "vitality, sense, fit, access and control". As urban design has developed, several problems concerning human cognition and behaviour have emerged. Due to limited knowledge of sensing in urban spaces, research on the "sense" and the "fit" of urban design has received little attention in recent decades. The field of geo-spatial cognition came into being in 1997 and has developed over the past 15 years, making great efforts in way-finding and urban behaviour simulation based on the platforms of GIS (geographic information systems) and VGE. The VGE platform is recognized as a proper tool for the analysis of human perception in urban places because of its efficient 3D spatial data management and excellent 3D visualization of output results. This article describes the visibility analysis method based on the 3D VGE platform. Given the uncertainty and variety of human perception in this research, the author arranges a survey of observer investigation and validation of the analysis results. Four factors relating space and human perception are mainly considered in this proposal: openness, permeability, environmental pressure and visibility; these are also used to identify different types of spaces. Overall, the author aims to contribute a possible way to understand human cognition in the geo-spatial domain, and to provide an efficient mathematical model linking spatial information and visual perception for the related research field.
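    A 3D isovist-style visibility analysis ultimately rests on line-of-sight tests against the environment model. As a toy illustration (a 2D occupancy grid rather than the paper's full 3D campus model), openness at a viewpoint can be scored as the fraction of sampled sight-lines that remain unblocked:

```python
def visible(grid, a, b):
    """Line-of-sight test on an occupancy grid (1 = blocked cell).

    Samples the segment a -> b at interior points; an isovist is the
    set of cells visible from a viewpoint, so an openness score is the
    fraction of sampled directions that stay unblocked.
    Points are (x, y); grid is indexed grid[y][x].
    """
    (ax, ay), (bx, by) = a, b
    steps = max(abs(bx - ax), abs(by - ay), 1)
    for s in range(1, steps):
        x = round(ax + (bx - ax) * s / steps)
        y = round(ay + (by - ay) * s / steps)
        if grid[y][x] == 1:
            return False
    return True

grid = [[0, 0, 0],
        [0, 1, 0],      # single wall cell in the centre
        [0, 0, 0]]
clear = visible(grid, (0, 0), (2, 0))    # along the open top row
blocked = visible(grid, (0, 0), (2, 2))  # diagonal through the wall
```

A real VGE implementation would cast rays against 3D building geometry and weight results by visual angle, but the visible/occluded decision per sight-line is the same primitive.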

  8. Virtual 3D City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for virtual 3-D city model generation: in the first, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second is based on high-resolution satellite images with laser scanning; and in the third, terrestrial images are used via close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques divide into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). Finally, we give the conclusions of this study, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques for generating virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model; each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3-D city models. A photo-realistic, scalable, geo-referenced virtual 3-D city model is very useful for many kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban environmental management and the real-estate industry. The construction of virtual 3-D city models is therefore one of the most interesting research topics of recent years.

  9. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  10. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created with the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine their location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure the student's interaction with the information: it is through experience that the student learns the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork, but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show that students find it easier to focus on learning these basic field skills in a classroom rather than in a field setting, and that they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing an opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field because basic skills are already embedded. 70% of students report increased confidence in mapping boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation.
This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.

  11. The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning

    ERIC Educational Resources Information Center

    Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos

    2004-01-01

    This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…

  12. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can ``scroll'' forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
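The Q-criterion named in the abstract above is a standard vortex-identification scalar computed from the velocity-gradient tensor. As an illustrative sketch (not the authors' actual pipeline), it can be evaluated pointwise as follows:

```python
import numpy as np

def q_criterion(J):
    """Q-criterion for a 3x3 velocity-gradient tensor J (J[i, j] = du_i/dx_j).

    Q = 0.5 * (||Omega||_F^2 - ||S||_F^2), where S = (J + J^T)/2 is the
    strain-rate tensor and Omega = (J - J^T)/2 is the rotation tensor.
    Q > 0 marks regions where rotation dominates strain, i.e. vortex cores;
    isosurfaces of Q are what such a VR viewer renders.
    """
    S = 0.5 * (J + J.T)
    Omega = 0.5 * (J - J.T)
    return 0.5 * (np.sum(Omega * Omega) - np.sum(S * S))

# Rigid-body rotation about z: pure rotation, so Q > 0 (vortex core)
J_vortex = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])

# Simple shear: rotation and strain balance exactly, so Q = 0
J_shear = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])

print(q_criterion(J_vortex))  # 1.0
print(q_criterion(J_shear))   # 0.0
```

In a real data set, J would be estimated by finite differences of the velocity field on a grid and the isosurface extracted (e.g. by marching cubes) before rendering.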

  13. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  14. A Modified Microsurgical Endoscopic-Assisted Transpedicular Corpectomy of the Thoracic Spine Based on Virtual 3-Dimensional Planning.

    PubMed

    Archavlis, Eleftherios; Schwandt, Eike; Kosterhon, Michael; Gutenberg, Angelika; Ulrich, Peter; Nimer, Amr; Giese, Alf; Kantelhardt, Sven Rainer

    2016-07-01

    The main difficulties of transpedicular corpectomies are the lack of space for vertebral body replacement in the neighborhood of critical structures, the necessity of sacrificing nerve roots in the thoracic spine, and the extent of hemorrhage due to venous epidural bleeding. We present a modified technique of transpedicular corpectomy using an endoscopic-assisted microsurgical technique performed through a single posterior approach. A 3-dimensional (3D) preoperative reconstruction could be helpful in planning for this complex anatomic region. Surface and volume 3D reconstructions were performed with Amira or the Dextroscope. The clinical experience of this study includes 7 cases, 2 with an unstable burst fracture and 5 with metastatic destructive vertebral body disease, all with significant retropulsion and obstruction of the spinal canal. We performed a comparison with a conventional cohort of transpedicular thoracic corpectomies. Qualitative parameters of the 3D virtual reality planning included the degree of bone removal, the distance from critical structures such as the myelon, and the implant diameter. Parameters were met in each case, with demonstration of optimal positioning of the implant without neurological complications. In all patients, the endoscope was a significant help in identifying the origins of active bleeding, residual tumor, and the extent of bone removal, facilitating cage insertion in a minimally invasive way, and helping to avoid root sacrifice on both sides. Microsurgical endoscopic-assisted transpedicular corpectomy may prove valuable in enhancing the safety of corpectomy in destructive vertebral body disease. The 3D virtual anatomic model greatly facilitated the preoperative planning. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Application of computer virtual simulation technology in 3D animation production

    NASA Astrophysics Data System (ADS)

    Mo, Can

    2017-11-01

    With the continuous development of computer technology, the application of virtual simulation technology has been further optimized and improved. It is now widely used in many fields of social development, such as city construction, interior design, industrial simulation and tourism teaching. This paper mainly introduces the use of virtual simulation technology in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying it in 3D animation are researched. The purpose is to provide a reference for the future promotion of 3D effects.

  16. Creation of a 3-dimensional virtual dental patient for computer-guided surgery and CAD-CAM interim complete removable and fixed dental prostheses: A clinical report.

    PubMed

    Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao

    2017-02-01

    This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, remaining dentition (including surrounding intraoral soft tissue), and the realistic appearance of facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), design a prosthetically driven surgical plan for computer-guided implant surgery, and fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  17. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The PER-Force Handcontroller was originally developed for the International Space Station under a Johnson Space Center Small Business Innovation Research (SBIR) contract. Produced by Cybernet Systems Corporation, the unit is a force-reflecting system that manipulates robots or objects by "feel." The Handcontroller moves in six degrees of freedom, with real and virtual reality forces simulated by a 3-D molecular modeling software package. It is used in molecular modeling in metallurgy applications, satellite docking research, and in research on military unmanned ground vehicles.

  18. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) has inherent potential for constructing effective learning environments thanks to its 3I characteristics: Interaction, Immersion and Imagination. It is now applied in education in increasingly profound ways as VRT develops, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is its most important design factor. Considering the advantages and disadvantages of each Web3D technology, VRML, Cult3D and Viewpoint were chosen. A web chatroom based on Flash and ASP technology was also created in order to make the Virtual Museum an interactive learning environment.

  19. Get immersed in the Soil Sciences: the first community of avatars in the EGU Assembly 2015!

    NASA Astrophysics Data System (ADS)

    Castillo, Sebastian; Alarcón, Purificación; Beato, Mamen; Emilio Guerrero, José; José Martínez, Juan; Pérez, Cristina; Ortiz, Leovigilda; Taguas, Encarnación V.

    2015-04-01

    Virtual reality and immersive worlds refer to artificial computer-generated environments with which users act and interact as in a familiar environment, through figurative virtual individuals (avatars). Virtual environments will be the technology of the early twenty-first century that most dramatically changes the way we live, particularly in the areas of training and education, product development and entertainment (Schmorrow, 2009). The usefulness of immersive worlds has been proved in different fields. They reduce geographic and social barriers between different stakeholders and create virtual social spaces which can positively impact learning and discussion outcomes (Lorenzo et al. 2012). In this work we present a series of interactive meetings in a virtual building, held to celebrate the International Year of Soils and to promote the importance of soil functions and soil conservation. In a virtual room, the avatars of senior researchers will meet young scientists' avatars to talk about: 1) what remains to be done in Soil Sciences; 2) the main current limitations and difficulties; and 3) the future hot research lines. Interactive participation does not require physically attending the EGU Assembly 2015. In addition, this virtual building inspired by Soil Sciences can be complemented with teaching resources from different locations around the world, and it will be used to improve the learning of Soil Sciences in a multicultural context. REFERENCES: Lorenzo C.M., Sicilia, M.A., Sánchez S. 2012. Studying the effectiveness of multi-user immersive environments for collaborative evaluation tasks. Computers & Education 59 (2012) 1361-1376. Schmorrow D.D. 2009. "Why virtual?" Theoretical Issues in Ergonomics Science 10(3): 279-282.

  20. 3D Virtual Learning Environments in Education: A Meta-Review

    ERIC Educational Resources Information Center

    Reisoglu, I.; Topu, B.; Yilmaz, R.; Karakus Yilmaz, T.; Göktas, Y.

    2017-01-01

    The aim of this study is to investigate recent empirical research studies about 3D virtual learning environments. A total of 167 empirical studies that involve the use of 3D virtual worlds in education were examined by meta-review. Our findings show that the "Second Life" platform has been frequently used in studies. Among the reviewed…

  1. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  2. An interactive three-dimensional virtual body structures system for anatomical training over the internet.

    PubMed

    Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram

    2006-04-01

    The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, that make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.

  3. Noninvasive computerized scanning method for the correlation between the facial soft and hard tissues for an integrated three-dimensional anthropometry and cephalometry.

    PubMed

    Galantucci, Luigi Maria; Percoco, Gianluca; Lavecchia, Fulvio; Di Gioia, Eliana

    2013-05-01

    This article describes a new methodology to scan facial soft tissue surfaces and integrate them with dental hard tissue models in a three-dimensional (3D) virtual environment, for a novel diagnostic approach. The facial and dental scans can be acquired using any optical scanning system: the models are then aligned and integrated to obtain a fully navigable virtual representation of the patient's head. In this article, we report in detail and further implement a method for integrating 3D digital cast models into a 3D facial image, to visualize the anatomic position of the dentition. This system uses several 3D technologies to scan and digitize, integrating them with traditional dentistry records. The acquisitions were mainly performed using photogrammetric scanners, suitable for clinics or hospitals, able to obtain high mesh resolution and optimal surface texture for photorealistic rendering of the face. To increase the quality and resolution of photogrammetric scanning of the dental elements, the authors propose a new technique to enhance the texture of the dental surface. Three examples of the application of the proposed procedure are reported in this article, using first laser scanning and photogrammetry and then photogrammetry alone. Using cheek retractors, it is possible to directly scan a great number of dental elements. The final results are good navigable 3D models that integrate facial soft tissue and dental hard tissues. The method is characterized by the complete absence of ionizing radiation, portability and simplicity, fast acquisition, easy alignment of the 3D models, and a wide angle of view of the scanner. This method is completely noninvasive and can be repeated any time the physician needs new clinical records.
The 3D virtual model is a precise representation both of the soft and the hard tissue scanned, and it is possible to make any dimensional measure directly in the virtual space, for a full integrated 3D anthropometry and cephalometry. Moreover, the authors propose a method completely based on close-range photogrammetric scanning, able to detect facial and dental surfaces, and reducing the time, the complexity, and the cost of the scanning operations and the numerical elaboration.
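The alignment of facial and dental scans described above is a rigid registration problem over corresponding points. One common way to solve it (a sketch under that assumption, not necessarily the authors' software) is the Kabsch/SVD best-fit method:

```python
import numpy as np

def kabsch_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P and Q are (N, 3) arrays of corresponding points, e.g. landmarks
    picked on two scans of the same anatomy. Returns a rotation R and
    translation t such that R @ p + t ~= q for each pair (p, q).
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical landmark sets: Q is P rotated 90 degrees about z and shifted
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([5.0, -2.0, 3.0])
Q = P @ R_true.T + t_true

R, t = kabsch_align(P, Q)
aligned = P @ R.T + t   # coincides with Q up to floating-point error
```

In practice, dense surface registration (e.g. ICP) refines such a landmark-based initialization, but the per-iteration rigid fit is this same computation.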

  4. Timing of three-dimensional virtual treatment planning of orthognathic surgery: a prospective single-surgeon evaluation on 350 consecutive cases.

    PubMed

    Swennen, Gwen R J

    2014-11-01

    The purpose of this article is to evaluate the timing of three-dimensional (3D) virtual treatment planning of orthognathic surgery in the daily clinical routine. A total of 350 consecutive patients were included in this study. All patients were scanned following the standardized "Triple CBCT Scan Protocol" in centric relation. Integrated 3D virtual planning and actual surgery were performed by the same surgeon in all patients. Although the workflow is clinically acceptable, software improvements, especially in 3D virtual occlusal definition, are still needed to make 3D virtual planning of orthognathic surgery less time-consuming and more user-friendly for the clinician. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Virtual Reality: The Future of Animated Virtual Instructor, the Technology and Its Emergence to a Productive E-Learning Environment.

    ERIC Educational Resources Information Center

    Jiman, Juhanita

    This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…

  6. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space, and endoscopy image space. To obtain a 3D error estimate, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground-truth information is present and systematic performance (including the calibration error) can be assessed. We obtain a mean in-plane error on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets from endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error.
We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
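The TRE defined above reduces, once both point sets are expressed in the CT frame, to a per-target 3D Euclidean distance. A minimal sketch (with hypothetical coordinates, not the paper's data):

```python
import numpy as np

def target_registration_error(p_recon, p_ct):
    """Per-target 3D Euclidean distance (TRE) between targets
    reconstructed from endoscopic video and the same targets
    identified in CT, both already in the CT coordinate frame."""
    p_recon = np.asarray(p_recon, dtype=float)
    p_ct = np.asarray(p_ct, dtype=float)
    return np.linalg.norm(p_recon - p_ct, axis=1)

# Hypothetical fiducial positions in mm (illustration only)
ct_targets = np.array([[10.0,  0.0, 5.0],
                       [ 0.0, 20.0, 5.0],
                       [-5.0, 10.0, 0.0]])
recon_targets = ct_targets + np.array([[1.0, 0.0, 0.0],
                                       [0.0, 0.0, 2.0],
                                       [0.0, 1.5, 0.0]])

tre = target_registration_error(recon_targets, ct_targets)
print(tre.tolist())   # [1.0, 2.0, 1.5]
print(tre.mean())     # 1.5
```

Reporting the mean (or RMS) TRE over all fiducials then summarizes the accumulated calibration, tracking, and registration errors in a single number.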

  7. VR for Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Blackmon, Theodore

    1998-01-01

    Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.

  8. ISS emergency scenarios and a virtual training simulator for Flight Controllers

    NASA Astrophysics Data System (ADS)

    Uhlig, Thomas; Roshani, Frank-Cyrus; Amodio, Ciro; Rovera, Alessandro; Zekusic, Nikola; Helmholz, Hannes; Fairchild, Matthew

    2016-11-01

    The current emergency response concept for the International Space Station (ISS) includes the support of the Flight Control Team. The team members therefore need to be trained in emergency scenarios and the corresponding crew procedures to ensure smooth collaboration between crew and ground. When astronaut and ground personnel training is not collocated, it is a challenging endeavor to ensure and maintain proper knowledge and skills in the Flight Control Team. Therefore, a virtual 3D simulator at the Columbus Control Center (Col-CC) is presented, which is used to train ground personnel in the on-board emergency response. The paper briefly introduces the main ISS emergency scenarios and the corresponding response strategy, details the resulting learning objectives for the Flight Controllers, and elaborates on the new simulation method, which will be used in the future. The status of the 3D simulator, first experiences, and further plans are discussed.

  9. Collaborative Aerial-Drawing System for Supporting Co-Creative Communication

    NASA Astrophysics Data System (ADS)

    Osaki, Akihiro; Taniguchi, Hiroyuki; Miwa, Yoshiyuki

    This paper describes a collaborative augmented reality (AR) system with which multiple users can simultaneously handwrite 3D lines in the air and manipulate those lines directly in the real world. In addition, we propose a new technique for co-creative communication utilizing the 3D drawing activity. Various 3D user interfaces have been proposed to date; although most of them aim to solve specific problems in virtual environments, the possibilities of 3D drawing expression have not yet been explored. Accordingly, we paid special attention to interaction with real objects in daily life, and designed the system so that real objects and 3D lines are manipulated by the same actions, without distinction. The developed AR system consists of a stereoscopic head-mounted display, a drawing tool, 6DOF sensors measuring three-dimensional position and Euler angles, and a 3D user interface that enables users to push, grasp and pitch 3D lines directly with the drawing tool. Users can also pick up a desired color from either the landscape or a virtual line through direct interaction with this tool. For sharing 3D lines among multiple users in the same place, a distributed AR system has been developed that mutually sends and receives drawing data between systems. With the developed system, users can design jointly in real space by arranging 3D drawings through direct manipulation. Moreover, new entertainment applications become possible, such as playing catch or holding a fencing match.

  10. First responder tracking and visualization for command and control toolkit

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Petrov, Plamen; Meisinger, Roger

    2010-04-01

    In order for First Responder Command and Control personnel to visualize incidents at urban building locations, DHS sponsored a small business research program to develop a tool to visualize 3D building interiors and movement of First Responders on site. 21st Century Systems, Inc. (21CSI), has developed a toolkit called Hierarchical Grid Referenced Normalized Display (HiGRND). HiGRND utilizes three components to provide a full spectrum of visualization tools to the First Responder. First, HiGRND visualizes the structure in 3D. Utilities in the 3D environment allow the user to switch between views (2D floor plans, 3D spatial, evacuation routes, etc.) and manually edit fast changing environments. HiGRND accepts CAD drawings and 3D digital objects and renders these in the 3D space. Second, HiGRND has a First Responder tracker that uses the transponder signals from First Responders to locate them in the virtual space. We use the movements of the First Responder to map the interior of structures. Finally, HiGRND can turn 2D blueprints into 3D objects. The 3D extruder extracts walls, symbols, and text from scanned blueprints to create the 3D mesh of the building. HiGRND increases the situational awareness of First Responders and allows them to make better, faster decisions in critical urban situations.

  11. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) was -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (< 4% difference) with p > .05 in most cases. Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.
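The percent-bias figures quoted above follow the usual definition of signed relative volume error averaged over lesions. A sketch with made-up volumes (not the study's data):

```python
import numpy as np

def percent_bias(measured, truth):
    """Mean signed percent volume error across a set of lesions:
    100 * mean((V_measured - V_truth) / V_truth)."""
    measured = np.asarray(measured, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return 100.0 * np.mean((measured - truth) / truth)

# Hypothetical lesion volumes in mm^3 (illustration only)
truth = [1000.0, 2000.0, 500.0]
segmented = [950.0, 2000.0, 480.0]   # tool under-segments two lesions

bias = percent_bias(segmented, truth)
print(round(bias, 2))   # -3.0, i.e. a 3% average under-estimation
```

A negative bias, as reported for tool A above, indicates systematic under-segmentation relative to the reference volumes.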

  12. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows the probabilistic assessments for, e.g., object classification and ephemeris computation as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows for the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. 
for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
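
The phase-space mapping described above, where any candidate inside a pre-defined χ^2 threshold is accepted so that the acceptable region is sampled roughly uniformly, can be sketched as a random-walk loop. This is an illustrative toy, not the authors' implementation: the χ^2 model, the Gaussian proposal (standing in for the virtual-observation difference proposal), and all parameter values are assumptions.

```python
import numpy as np

def chi_square(elements, observed, compute_observed):
    """Chi-square misfit between observed and computed astrometry."""
    residuals = observed - compute_observed(elements)
    return float(np.sum(residuals ** 2))

def map_phase_space(start, observed, compute_observed, proposal_scale,
                    chi2_threshold, n_steps=1000, seed=0):
    """Random-walk sampler that accepts any candidate whose chi-square
    lies inside the threshold, so that (upon convergence) the chain
    samples the acceptable orbital-element region roughly uniformly."""
    rng = np.random.default_rng(seed)
    current = np.asarray(start, dtype=float)
    samples = []
    for _ in range(n_steps):
        # Symmetric Gaussian proposal -- a stand-in for the
        # virtual-observation difference proposal in the paper.
        candidate = current + rng.normal(0.0, proposal_scale,
                                         size=current.shape)
        # Uniform acceptance inside the chi-square region;
        # reject (stay put) outside it.
        if chi_square(candidate, observed, compute_observed) <= chi2_threshold:
            current = candidate
        samples.append(current.copy())
    return np.array(samples)
```

As the abstract notes, the sampled elements would then be weighted on the basis of their corresponding χ^2-values before any probabilistic assessment.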

  13. Conformational properties of a pyridyl-substituted cinnamic acid studied by NMR measurements and computations

    NASA Astrophysics Data System (ADS)

    Csankó, K.; Forgo, P.; Boros, K.; Hohmann, J.; Sipos, P.; Pálinkó, I.

    2013-07-01

    Following a preliminary exploration of the conformational space by the PM3 and ab initio HF/6-31G* methods, the conformational characteristics of the scarcely available Z isomer of an α-pyridyl-substituted cinnamic acid [Z-2-(3′-pyridyl)-3-phenylpropenoic acid] were studied by NMR spectroscopy (NOESY measurements) in DMSO-d6, methanol-d4 and chloroform-d. Calculations predicted that full conjugation was overruled by steric interactions and that the rotation of the pyridyl ring was not restricted. NOESY measurements indeed verified that in all three solvents the pyridyl group rotated virtually freely, while the rotation of the phenyl group was somewhat restricted.

  14. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    PubMed

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
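
The bit-space collision effect measured in the study can be illustrated with a minimal folded fingerprint: substructure features are hashed into a fixed-width bit space, and similarity is computed on the resulting bit sets, so a small bit space forces distinct features onto the same bit. The feature strings and hash choice below are illustrative assumptions, not the atom-typing schemes or fingerprint methods actually implemented in Canvas.

```python
import hashlib

def fold_fingerprint(features, n_bits):
    """Hash each substructure feature into a fixed bit space.
    A small n_bits (e.g. 1024) causes collisions: distinct features
    land on the same bit, which distorts similarity scores."""
    bits = set()
    for feature in features:
        digest = hashlib.md5(feature.encode("utf-8")).hexdigest()
        bits.add(int(digest, 16) % n_bits)
    return frozenset(bits)

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two bit sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Comparing similarity scores computed at n_bits = 1024 versus a large bit space such as 2**20 on real feature sets gives a miniature version of the enrichment degradation the authors report for small addressable bit spaces.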

  15. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  16. Preserving the Finger Lakes for the Future: A Prototype Decision Support System for Water Resource Management, Open Space, and Agricultural Protection

    NASA Technical Reports Server (NTRS)

    Brower, Robert

    2003-01-01

    As described herein, this project has progressed well, with the initiation or completion of a number of program facets at programmatic, technical, and inter-agency levels. The concept of the Virtual Management Operations Center has taken shape, grown, and has been well received by parties from a wide variety of agencies and organizations in the Finger Lakes region and beyond. As it has evolved in design and functionality, and to better illustrate its current focus for this project, it has been given the expanded name of Watershed Virtual Management Operations Center (W-VMOC). It offers the advanced, compelling functionality of interactive 3D visualization interfaced with 2D mapping, all accessed via Internet or virtually any kind of distributed computer network. This strong foundation will allow the development of a Decision Support System (DSS) with anticipated enhanced functionality to be applied to the myriad issues involved in the wise management of the Finger Lakes region.

  17. 3D Virtual Reconstruction of an Urban Historical Space: A Consideration on the Method

    NASA Astrophysics Data System (ADS)

    Galizia, M.; Santagati, C.

    2011-09-01

    Urban historical spaces are often characterized by a variety of shapes, geometries, volumes and materials. Their virtual reconstruction requires a critical approach in terms of the density of the acquired data, timing optimization, and the quality and lightness of the final product. The research team focused its attention on Francesco Neglia square (previously named Saint Thomas square) in Enna, an urban space fronted by architectures of different historical periods and styles: the belfry and porch of Saint Thomas' church (in Aragonese-Catalan style, dating from the 14th century), the baroque church of the Anime Sante (17th century), the nunnery of Saint Mary of the Grace (18th century), as well as some civil buildings of minor importance built in the mid twentieth century. The research compared two different modelling approaches: the first is based on the construction of triangulated surfaces which are segmented and simplified; the second is based on the detection of the geometrical features of the surfaces, the extraction of the most significant profiles using software dedicated to the elaboration of point clouds, and the subsequent mathematical reconstruction using 3D modelling software. The following step was the virtual reconstruction of the urban scene by assembling the single optimized models. This work highlighted the importance of the operator and of his or her cultural contribution, essential for recognizing the geometries which generate surfaces in order to create high-quality semantic models.

  18. Vexcel Spells Excellence for Earth and Space

    NASA Technical Reports Server (NTRS)

    2002-01-01

    With assistance from Stennis Space Center, Vexcel was able to strengthen the properties of its Apex Ground Station(TM), an affordable, end-to-end system that comes complete with a tracking antenna that permits coverage within an approximate 2,000-kilometer radius of its location, a high speed direct-to-disk data acquisition system that can download information from virtually any satellite, and data processing software for virtually all synthetic aperture radar and optical satellite sensors. Vexcel is using an Apex system linked to the Terra satellite to help scientists and NASA personnel measure land and ocean surface temperatures, detect fires, monitor ocean color and currents, produce global vegetation maps and data, and assess cloud characteristics and aerosol concentrations. In addition, Vexcel is providing NASA with close-range photogrammetry software for the International Space Station. The technology, commercially available as FotoG(TM), was developed with SBIR funding and support from NASA's Jet Propulsion Laboratory. Commercially, FotoG is used for demanding projects taken on by engineering firms, nuclear power plants, oil refineries, and process facilities. A version of Vexcel's close-range photo measurement system was also used to create virtual 3-D backdrops for a high-tech science fiction film.

  19. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules: various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their functions: Render Server, Device Server and Control Server. The Device Server contains external modules requiring event-based communication for the integration, while the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.

  20. Envisioning the future of home care: applications of immersive virtual reality.

    PubMed

    Brennan, Patricia Flatley; Arnott Smith, Catherine; Ponto, Kevin; Radwin, Robert; Kreutz, Kendra

    2013-01-01

    Accelerating the design of technologies to support health in the home requires (1) a better understanding of how the household context shapes consumer health behaviors and (2) the opportunity to afford engineers, designers, and health professionals the chance to systematically study the home environment. We developed the Living Environments Laboratory (LEL) with a fully immersive, six-sided virtual reality CAVE to enable recreation of a broad range of household environments. We have successfully developed a virtual apartment, including a kitchen, living space, and bathroom. Over 2000 people have visited the LEL CAVE. Participants use an electronic wand to activate common household affordances such as opening a refrigerator door or lifting a cup. Challenges currently being explored include creating natural gestures to interface with virtual objects; developing robust, simple procedures to capture actual living environments and render them in a 3D visualization; and devising systematic, stable terminologies to characterize home environments.

  1. Virtual 3D planning of tracheostomy placement and clinical applicability of 3D cannula design: a three-step study.

    PubMed

    de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B

    2018-02-01

    We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. 3D models of commercially available cannulas were positioned in 3D models of the airway. In study (1), a cohort that underwent tracheostomy between 2013 and 2015 was selected (n = 26). The cannula was virtually placed in the airway in the pre-operative CT scan, and its position was compared to the cannula position on post-operative CT scans. In study (2), a cohort with neuromuscular disease (n = 14) was analyzed. Virtual cannula placement was performed in CT scans to test whether problems could be anticipated. Finally (3), for a patient with Duchenne muscular dystrophy and complications from a conventional tracheostomy cannula, a patient-specific cannula was 3D designed, fabricated, and placed. (1) The 3D planned and post-operative tracheostomy positions differed significantly. (2) Three groups of patients were identified: (A) normal anatomy; (B) abnormal anatomy, commercially available cannula fits; and (C) abnormal anatomy, custom-made cannula may be necessary. (3) The position of the custom-designed cannula was optimal and the trachea healed. Virtual planning of the tracheostomy did not correlate with actual cannula position. Identifying patients with abnormal airway anatomy, in whom commercially available cannulas cannot be optimally positioned, is advantageous. Patient-specific cannula design based on 3D virtualization of the airway was beneficial in a patient with abnormal airway anatomy.

  2. Web 3D for Public, Environmental and Occupational Health: Early Examples from Second Life®

    PubMed Central

    Kamel Boulos, Maged N.; Ramloll, Rameshsharma; Jones, Ray; Toth-Cohen, Susan

    2008-01-01

    Over the past three years (2006–2008), the medical/health and public health communities have shown a growing interest in using online 3D virtual worlds like Second Life® (http://secondlife.com/) for health education, community outreach, training and simulation purposes. 3D virtual worlds are seen as the precursors of ‘Web 3D’, the next major iteration of the Internet that will follow in the coming years. This paper provides a tour of several flagship Web 3D experiences in Second Life®, including Play2Train Islands (emergency preparedness training), the US Centers for Disease Control and Prevention—CDC Island (public health), Karuna Island (AIDS support and information), Tox Town at Virtual NLM Island (US National Library of Medicine - environmental health), and Jefferson’s Occupational Therapy Center. We also discuss the potential and future of Web 3D. These are still the early days of 3D virtual worlds, and many of their potentials and affordances remain to be explored as the technology matures and improves over the coming months and years. PMID:19190358

  3. Web3D Technologies in Learning, Education and Training: Motivations, Issues, Opportunities

    ERIC Educational Resources Information Center

    Chittaro, Luca; Ranon, Roberto

    2007-01-01

    Web3D open standards allow the delivery of interactive 3D virtual learning environments through the Internet, reaching potentially large numbers of learners worldwide, at any time. This paper introduces the educational use of virtual reality based on Web3D technologies. After briefly presenting the main Web3D technologies, we summarize the…

  4. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1996-01-01

    This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources, and have expanded our studies of multiple sources. The results of this research are described below.

  5. 3D Visualization for Virtual Museum Development

    NASA Astrophysics Data System (ADS)

    Skamantzari, M.; Georgopoulos, A.

    2016-06-01

    The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.

  6. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that, in the 3D virtual room, decoded surfaces can achieve negligible distortion compared to the original reconstructions. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  7. Simulating Humans as Integral Parts of Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Bruins, Anthony C.; Rice, Robert; Nguyen, Lac; Nguyen, Heidi; Saito, Tim; Russell, Elaine

    2006-01-01

    The Collaborative-Virtual Environment Simulation Tool (C-VEST) software was developed for use in a NASA project entitled "3-D Interactive Digital Virtual Human." The project is oriented toward the use of a comprehensive suite of advanced software tools in computational simulations for the purposes of human-centered design of spacecraft missions and of the spacecraft, space suits, and other equipment to be used on the missions. The C-VEST software affords an unprecedented suite of capabilities for three-dimensional virtual-environment simulations with plug-in interfaces for physiological data, haptic interfaces, plug-and-play software, realtime control, and/or playback control. Mathematical models of the mechanics of the human body and of the aforementioned equipment are implemented in software and integrated to simulate forces exerted on and by astronauts as they work. The computational results can then support the iterative processes of design, building, and testing in applied systems engineering and integration. The results of the simulations provide guidance for devising measures to counteract effects of microgravity on the human body and for the rapid development of virtual (that is, simulated) prototypes of advanced space suits, cockpits, and robots to enhance the productivity, comfort, and safety of astronauts. The unique ability to implement human-in-the-loop immersion also makes the C-VEST software potentially valuable for use in commercial and academic settings beyond the original space-mission setting.

  8. Preserving the Finger Lakes for the Future: A Prototype Decision Support System for Water Resource Management, Open Space, and Agricultural Protection

    NASA Technical Reports Server (NTRS)

    Brower, Robert

    2004-01-01

    This report summarizes the activity conducted under NASA Grant NAG13-02059, entitled "Preserving the Finger Lakes for the Future: A Prototype Decision Support System for Water Resources Management, Open Space and Agricultural Protection", for the period of September 26, 2003 to September 25, 2004. The RACNE continues to utilize the services of its affiliate, the Institute for the Application of Geospatial Technology at Cayuga Community College, Inc. (IAGT), for the purposes of this project under its permanent operating agreement with IAGT. IAGT is a 501(c)(3) not-for-profit corporation created by the RACNE for the purpose of carrying out its programmatic and administrative mission. The "Preserving the Finger Lakes for the Future" project has progressed and evolved as planned, with the continuation or initiation of a number of program facets at programmatic, technical, and inter-agency levels. The project has grown, starting with the well-received core concept of the Virtual Management Operations Center (VMOC), to the functional Watershed Virtual Management Operations Center (W-VMOC) prototype, to the more advanced Finger Lakes Decision Support System (FLDSS) prototype, deployed for evaluation and assessment to a wide variety of agencies and organizations in the Finger Lakes region and beyond. This suite of tools offers the advanced, compelling functionality of interactive 3D visualization interfaced with 2D mapping, all accessed via Internet or virtually any kind of distributed computer network.

  9. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and does therefore not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.

  10. Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies

    ERIC Educational Resources Information Center

    Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis

    2009-01-01

    We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…

  11. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  12. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. 
In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  13. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built around a special combined fish-eye lens module, capable of simultaneously producing 3D coordinate information for the whole observation space and acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.

  14. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  15. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects associated with other 3D display methods. To create truly engaging three-dimensional television programmes, a virtual studio is required that generates, edits and integrates 3D content combining virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Particular attention is given to depth extraction from captured integral 3D images. The depth calculation from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD, and a further improvement in its precision, is proposed and verified.

  16. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the positioning concerns of the treating therapists may go unaddressed. In this paper, we present a framework that enables remotely located experts to collaborate virtually and be present inside the 3D treatment room when necessary. A multi-3D-camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real time. Computational tasks that would normally occur on the client side were offloaded to the server side to allow hardware flexibility on the client side. On the server side, client-specific real-time stereo rendering of the 3D treatment room was performed on a scalable multi-GPU (graphics processing unit) system. The rendered 3D images were then encoded for streaming using GPU-based H.264 encoding. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For remotely located experts on a 100 Mbps network, visualization ran at 8–40 frames per second, depending on the available bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient-setup visualization, enabling expansion of high-quality radiation therapy into challenging environments. PMID:23440605
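The reported frame rates track the network link: gigabit clients hit the renderer's cap while 100 Mbps clients are throughput-limited. As a rough illustration only (not from the paper; the encoded frame size is a hypothetical number), the achievable rate is the lesser of what the renderer and the network can sustain:

```python
def achievable_fps(bandwidth_mbps: float, frame_kbit: float, render_fps: float) -> float:
    """Streaming rate limited by network throughput or by the renderer,
    whichever is lower. frame_kbit is the encoded size of one stereo frame."""
    network_fps = bandwidth_mbps * 1000.0 / frame_kbit  # Mbit/s -> kbit/s
    return min(network_fps, render_fps)

# With a hypothetical 2500 kbit encoded stereo frame, a 100 Mbps link caps
# streaming at 40 fps, while gigabit Ethernet leaves the 81 fps renderer cap.
print(achievable_fps(100, 2500, 81))   # 40.0
print(achievable_fps(1000, 2500, 81))  # 81
```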

  17. A Virtual Campus Based on Human Factor Engineering

    ERIC Educational Resources Information Center

    Yang, Yuting; Kang, Houliang

    2014-01-01

Three-dimensional (3D) virtual reality has become increasingly popular in many areas, especially in building digital campuses. This paper introduces a virtual campus based on a 3D model of the Tourism and Culture College of Yunnan University (TCYU). Production of the virtual campus was aided by Human Factors and Ergonomics (HF&E), an…

  18. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three-dimensional observers, because intuition relies on experience gained in a three-dimensional environment. Gaining experience with virtual four-dimensional objects and virtual three-manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. To enable such a capability, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. This dissertation describes a technology for converting a representation of higher-dimensional models into a format that can be displayed in real time on the graphics cards of many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four-dimensional objects on the desktop computer. The ultimate goal has been to give the user a tangible and memorable experience with mathematical models of four-dimensional objects, such that the user can see the model from any user-selected vantage point. Using a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view, or "aspect", is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position, extracting or "plucking" an embedded 3D slice, or aspect, from the embedding four-space. This plucked 3D aspect can be viewed from all angles in a conventional 3D viewer with three multiple-POV viewports, and optionally exported to a third-party CAD viewer for further manipulation. Plucking and manipulating the aspect provides a tangible experience for the end user, in the same manner that any 3D computer-aided-design viewing and manipulation tool does for the engineer, or a 3D video game does for the nascent student.

  19. The Efficacy of an Immersive 3D Virtual versus 2D Web Environment in Intercultural Sensitivity Acquisition

    ERIC Educational Resources Information Center

    Coffey, Amy Jo; Kamhawi, Rasha; Fishwick, Paul; Henderson, Julie

    2017-01-01

    Relatively few studies have empirically tested computer-based immersive virtual environments' efficacy in teaching or enhancing pro-social attitudes, such as intercultural sensitivity. This channel study experiment was conducted (N = 159) to compare what effects, if any, an immersive 3D virtual environment would have upon subjects' intercultural…

  20. Agreement and reliability of pelvic floor measurements during contraction using three-dimensional pelvic floor ultrasound and virtual reality.

    PubMed

    Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B

    2012-07-01

Virtual reality is a novel method of visualizing ultrasound data with the perception of depth, and it offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual-reality measurement technique, and to establish their reliability and agreement. One hundred symptomatic patients visiting a tertiary pelvic floor clinic, with a normal intact levator ani muscle diagnosed on translabial ultrasound, were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. For conventional 3D ultrasound, the levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume (in cm³). Levator ani hiatus volumes were then measured semi-automatically in virtual reality (in cm³) using a segmentation algorithm. Intra- and interobserver analyses of reliability and agreement were performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 cm³ (95% CI, −0.15 to 0.35). The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual-reality measurements was > 0.96. Intra- and interobserver ICCs were > 0.94 for conventional 3D ultrasound measurements and > 0.97 for virtual-reality measurements, indicating good reliability for both. Levator ani hiatus volume measurements performed in virtual reality were reliable, and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
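The conventional estimate here is simple slab arithmetic (measured area times the 1.5 cm rendered slice), and the agreement analysis is a mean difference with limits of agreement. A minimal sketch of both steps, with purely illustrative numbers (not the study's data):

```python
import numpy as np

def hiatal_volume_cm3(area_cm2: float, slice_cm: float = 1.5) -> float:
    """Conventional 3D-ultrasound estimate: measured levator area times
    the thickness of the rendered slab."""
    return area_cm2 * slice_cm

def agreement(a, b):
    """Mean difference between paired measurements, with 95% limits of
    agreement (Bland-Altman style)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    m, s = d.mean(), d.std(ddof=1)
    return m, m - 1.96 * s, m + 1.96 * s

# A hypothetical 12.0 cm^2 hiatal area in the 1.5 cm slab gives 18.0 cm^3.
print(hiatal_volume_cm3(12.0))  # 18.0
```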

  1. SpectraPLOT, Visualization Package with a User-Friendly Graphical Interface

    NASA Astrophysics Data System (ADS)

    Sebald, James; Macfarlane, Joseph; Golovkin, Igor

    2017-10-01

SPECT3D is a collisional-radiative spectral analysis package designed to compute detailed emission, absorption, or x-ray scattering spectra, filtered images, XRD signals, and other synthetic diagnostics. The spectra and images are computed for virtual detectors by post-processing the results of hydrodynamics simulations in 1D, 2D, and 3D geometries. SPECT3D can account for a variety of instrumental response effects so that direct comparisons between simulations and experimental measurements can be made. SpectraPLOT is a user-friendly graphical interface for viewing a wide variety of results from SPECT3D simulations and for applying various instrumental effects to the simulated images and spectra. We will present SpectraPLOT's ability to display a variety of data, including spectra, images, light curves, streaked spectra, space-resolved spectra, and drilldown plasma-property plots, for an example argon-doped capsule implosion experiment. Future SpectraPLOT features and enhancements will also be discussed.

  2. From tissue to silicon to plastic: three-dimensional printing in comparative anatomy and physiology

    PubMed Central

    Lauridsen, Henrik; Hansen, Kasper; Nørgård, Mathias Ørum; Wang, Tobias; Pedersen, Michael

    2016-01-01

Comparative anatomy and physiology are disciplines concerned with structures and mechanisms in three-dimensional (3D) space. For the past centuries, scientific reports in these fields have relied on written descriptions and two-dimensional (2D) illustrations, but in recent years 3D virtual modelling has entered the scene. However, comprehension of complex anatomical structures is hampered by reproduction on flat, inherently 2D screens. One way to circumvent this problem is to produce 3D-printed scale models. We have applied computed tomography and magnetic resonance imaging to produce digital models of animal anatomy well suited to be printed on low-cost 3D printers. In this communication, we report how to apply such technology in comparative anatomy and physiology to aid discovery, description, comprehension and communication, and we seek to inspire fellow researchers in these fields to embrace this emerging technology. PMID:27069653

  3. Desktop-VR system for preflight 3D navigation training

    NASA Astrophysics Data System (ADS)

    Aoki, Hirofumi; Oman, Charles M.; Buckland, Daniel A.; Natapoff, Alan

Crews who inhabit spacecraft with complex 3D architecture frequently report in-flight disorientation and navigation problems. Preflight virtual reality (VR) training may reduce those risks. Although immersive VR techniques may better support spatial-orientation training in a local environment, a non-immersive desktop (DT) system may be more convenient for navigation training in "building-scale" spaces, especially if the two methods achieve comparable results. In this study, trainees' orientation and navigation performance during simulated space-station emergency egress tasks was compared while using immersive head-mounted display (HMD) and DT VR systems. Analyses showed no differences in pointing angular error or egress time among the groups. The HMD group was significantly faster than the DT group when pointing from the destination to the start location and from the start toward a different destination. However, this may be attributable to differences in the input devices used (a head-tracker for the HMD group versus a keyboard touchpad or a gamepad in the DT group). All other 3D navigation performance measures were similar across the immersive and non-immersive VR systems, suggesting that the simpler desktop VR system may be useful for astronaut 3D navigation training.

  4. Identification and characterization of low-mass stars and brown dwarfs using Virtual Observatory tools.

    NASA Astrophysics Data System (ADS)

    Aberasturi, M.; Solano, E.; Martín, E.

    2015-05-01

Low-mass stars and brown dwarfs (with spectral types M, L, T and Y) are the most common objects in the Milky Way. A complete census of these objects is necessary to test theories of their complex structure and formation processes. In order to increase the number of known objects in the Solar neighborhood (d < 30 pc), we have made use of the Virtual Observatory, which allows efficient handling of the huge amount of information available in astronomical databases. We also used the WFC3 installed on the Hubble Space Telescope to look for T5+ dwarf binaries.

  5. Synfograms: a new generation of holographic applications

    NASA Astrophysics Data System (ADS)

    Meulien Öhlmann, Odile; Öhlmann, Dietmar; Zacharovas, Stanislovas J.

    2008-04-01

The new synthetic four-dimensional printing technique (Syn4D) behind Synfograms introduces time (animation) into the spatial configuration of imprinted three-dimensional shapes. While lenticular solutions offer 2 to 9 stereoscopic images, Syn4D offers large-format, full-colour, true-3D visualization printing of 300 to 2500 frames imprinted as holographic dots. Over the past two years, Syn4D high-resolution displays have proved extremely effective for museum presentations, engineering design, automobile prototyping and advertising, as well as for portrait and fashion applications. The main advantage of Syn4D is that it accepts a wide variety of digital media (most 3D modelling programs, 3D scanning systems, video sequences, digital photography and tomography), as well as the Syn4D camera-track system for live recording of spatial scenes changing over time. The use of a digital holographic printer in conjunction with Syn4D image-acquisition and processing devices separates printing from image creation, making four-dimensional printing similar to conventional digital photography, where imaging and printing are usually separated in space and time. Besides making content easy to prepare, Syn4D has also developed new display and lighting solutions for trade shows, museums, point-of-purchase displays and merchandising. The introduction of Synfograms is opening new applications for real-life and virtual 4D displays. In this paper we analyse the 3D market, the properties of Synfograms and their specific applications, the problems we encountered and the solutions we found, and we discuss customer demand and the need for new product development.

  6. Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology

    NASA Astrophysics Data System (ADS)

    Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim

    2016-09-01

Due to their large penetration depth and small wavelength, hard x-rays offer unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volumes. In classical absorption-based computed tomography, however, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to making the phase shift visible: contrast arises from self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was long perceived as a synchrotron-based imaging technique. In this contribution we show that by combining high-brightness liquid-metal-jet microfocus sources with suitable sample-preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality reaches a level accessible to automatic 3D segmentation.
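Phase retrieval from a single propagation distance is commonly performed with a Paganin-type low-pass filter before tomographic reconstruction. The sketch below illustrates that generic step, not the authors' pipeline; the parameter values and the single-material delta/beta assumption are illustrative only:

```python
import numpy as np

def paganin_thickness(intensity, pixel_size, dist, wavelength, delta_beta):
    """Single-distance Paganin-style phase retrieval: low-pass filter the
    flat-field-corrected intensity in Fourier space, then convert to a
    projected-thickness-like map via the Beer-Lambert relation."""
    ny, nx = intensity.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    lowpass = 1.0 / (1.0 + np.pi * wavelength * dist * delta_beta * f2)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) * lowpass))
    return -np.log(np.clip(filtered, 1e-12, None))

# Sanity check: a perfectly flat field retrieves zero thickness everywhere.
flat = paganin_thickness(np.ones((16, 16)), 1e-6, 0.1, 1e-10, 500.0)
```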

  7. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

This paper focuses on the digital demonstration of Three Gorges archaeological relics, exhibiting the achievements of the protective measures. A novel and effective method based on 3D visualization technology, including large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method comprises three stages: pre-processing, 3D modelling and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D model library is built using digital image processing and 3D modelling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital heritage projects and enriches the content of digital archaeology.

  8. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium, but producing a 3D-CG animation takes a long time. To reduce the cost of producing 3D-CG educational content and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  9. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented-reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system achieves high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is designed to obtain the projector parameters. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projected indication accuracy of the system is verified with a subpixel pattern-projection technique.
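At its core, mapping a point of the digitized 3D model to a projector pixel follows the same pinhole model used for cameras; the paper derives a more general back-projection model, so the sketch below is only the generic baseline, with illustrative parameter values:

```python
import numpy as np

def project_point(K, R, t, X):
    """Map a 3D world point X to projector pixel coordinates using
    intrinsics K (3x3) and extrinsics R (3x3 rotation), t (translation)."""
    x_cam = R @ np.asarray(X, float) + t  # world frame -> projector frame
    x_img = K @ x_cam                     # apply intrinsics
    return x_img[:2] / x_img[2]           # perspective division

# Hypothetical intrinsics: 800 px focal length, principal point (640, 480).
K = np.array([[800.0, 0, 640], [0, 800.0, 480], [0, 0, 1]])
uv = project_point(K, np.eye(3), np.zeros(3), [0.0, 0.0, 2.0])
# A point on the optical axis lands at the principal point (640, 480).
```

Calibration then amounts to choosing R and t (and refining K) so that projected patterns land on the measured 3D coordinates, which is the optimization the paper performs with its positioning data.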

  10. Dynamic 3D echocardiography in virtual reality

    PubMed Central

    van den Bosch, Annemien E; Koning, Anton HJ; Meijboom, Folkert J; McGhie, Jackie S; Simoons, Maarten L; van der Spek, Peter J; Bogers, Ad JJC

    2005-01-01

Background: This pilot study was performed to evaluate whether virtual reality is applicable to three-dimensional echocardiography and whether three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. Methods: Three-dimensional echocardiographic datasets from 2 normal subjects and from 4 patients with a mitral valve pathological condition were included in the study. The datasets were acquired with the Philips Sonos 7500 echo system and transferred to the BARCO (Barco N.V., Kortrijk, Belgium) I-Space. Ten independent observers assessed the 6 three-dimensional datasets with and without mitral valve pathology. After 10 minutes' instruction in the I-Space, all of the observers could use the virtual pointer that is necessary to create cut planes in the hologram. Results: The 10 independent observers correctly assessed the normal and pathological mitral valves in the holograms (analysis time approximately 10 minutes). Conclusion: This report shows that dynamic holographic imaging of three-dimensional echocardiographic data is feasible, although the applicability and usefulness of this technology in clinical practice are still limited. PMID:16375768

  11. Virtual landmarks

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.

    2017-03-01

Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must lie on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object, and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features at later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and the outer boundaries of the left and right lungs along the pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects, and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental to many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
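The recursion described above can be sketched as follows: each region contributes its centroid as a (possibly off-surface) landmark, and its principal axis splits the region for the next level. This is an illustrative reconstruction of the idea, not the authors' code:

```python
import numpy as np

def virtual_landmarks(points, levels):
    """Recursive PCA subdivision: the centroid of each region is a virtual
    landmark; the region is split by the plane through the centroid normal
    to its principal axis, and the halves are processed at the next level."""
    landmarks = []

    def recurse(pts, depth):
        if depth > levels or len(pts) < 2:
            return
        c = pts.mean(axis=0)
        landmarks.append(c)
        w, v = np.linalg.eigh(np.cov(pts.T))  # eigen-decomposition of covariance
        axis = v[:, np.argmax(w)]             # principal direction
        side = (pts - c) @ axis
        recurse(pts[side < 0], depth + 1)
        recurse(pts[side >= 0], depth + 1)

    recurse(np.asarray(points, float), 1)
    return np.array(landmarks)

# Level 1 on a symmetric point set returns just the overall centroid.
```

Because every landmark is a centroid of coordinates, translating or rotating the object moves the landmarks with it, which is the invariance property noted in the abstract.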

12. Understanding Human Perception of Building Categories in Virtual 3D Cities - A User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  13. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

In this paper, we present 3D virtual phantom design software, developed using object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export it as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful tool for 3D phantom configuration and has passed a real-scene application test. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms.

  14. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  16. Projecting 2D gene expression data into 3D and 4D space.

    PubMed

    Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D

    2007-04-01

Video games typically generate virtual 3D objects by texture-mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video-game authoring software to texture-map images of gene expression data onto B-spline-based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images with the three dimensions (X, Y, and Z) of a B-spline model. B-spline model frameworks were built either from confocal data or extracted de novo from 2D images, once again using video-game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed in standard web browsers and use OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner; the UV-mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparison of gene expression patterns possible. Finally, video-game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
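The core of UV mapping, associating image coordinates (U, V) with a surface point (X, Y, Z), can be sketched with bilinear interpolation over a control grid. The real system evaluates B-spline patches, so this is an illustrative simplification only:

```python
import numpy as np

def uv_to_xyz(u, v, grid):
    """Map (u, v) in [0, 1]^2 to a 3D point by bilinear interpolation over
    a (rows, cols, 3) control grid sampled from the surface patch."""
    rows, cols, _ = grid.shape
    x, y = u * (cols - 1), v * (rows - 1)
    j, i = min(int(x), cols - 2), min(int(y), rows - 2)  # cell indices
    fx, fy = x - j, y - i                                # fractions in cell
    top = (1 - fx) * grid[i, j] + fx * grid[i, j + 1]
    bottom = (1 - fx) * grid[i + 1, j] + fx * grid[i + 1, j + 1]
    return (1 - fy) * top + fy * bottom

# On a flat unit patch the mapping is the identity in X and Y.
patch = np.array([[[0.0, 0, 0], [1, 0, 0]],
                  [[0, 1, 0], [1, 1, 0]]])
```

Every pixel of a gene-expression image carries a (u, v), so this lookup is also what aligns different images onto the one model framework.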

  17. Micro-CTvlab: A web based virtual gallery of biological specimens using X-ray microtomography (micro-CT)

    PubMed Central

    Faulwetter, Sarah; Chatzinikolaou, Eva; Michalakis, Nikitas; Filiopoulou, Irene; Minadakis, Nikos; Panteri, Emmanouela; Perantinos, George; Gougousis, Alexandros; Arvanitidis, Christos

    2016-01-01

Background: During recent years, X-ray microtomography (micro-CT) has seen increasing use in biological research areas such as functional morphology, taxonomy, evolutionary biology and developmental research. Micro-CT is a technology which uses X-rays to create sub-micron-resolution images of the external and internal features of specimens. These images can then be rendered in three-dimensional space and used for qualitative and quantitative 3D analyses. However, micro-CT datasets are rarely made available to the public for online exploration and dissemination, due to their large size and a lack of dedicated online platforms for interactive manipulation of 3D data. Here, the development of a virtual micro-CT laboratory (Micro-CTvlab) is described, which can be used by anyone interested in digitisation methods and biological collections and which aims at making micro-CT data of natural history specimens freely available over the internet. New information: The Micro-CTvlab offers the user virtual image galleries of various taxa which can be displayed and downloaded through a web application. With a few clicks, accurate, detailed, three-dimensional models of species can be studied and virtually dissected without destroying the actual specimen. The data and functions of the Micro-CTvlab can be accessed either on a normal computer or through a dedicated version for mobile devices. PMID:27956848

  18. Micro-CTvlab: A web based virtual gallery of biological specimens using X-ray microtomography (micro-CT).

    PubMed

    Keklikoglou, Kleoniki; Faulwetter, Sarah; Chatzinikolaou, Eva; Michalakis, Nikitas; Filiopoulou, Irene; Minadakis, Nikos; Panteri, Emmanouela; Perantinos, George; Gougousis, Alexandros; Arvanitidis, Christos

    2016-01-01

    During recent years, X-ray microtomography (micro-CT) has seen an increasing use in biological research areas, such as functional morphology, taxonomy, evolutionary biology and developmental research. Micro-CT is a technology which uses X-rays to create sub-micron resolution images of external and internal features of specimens. These images can then be rendered in a three-dimensional space and used for qualitative and quantitative 3D analyses. However, the online exploration and dissemination of micro-CT datasets are rarely made available to the public due to their large size and a lack of dedicated online platforms for the interactive manipulation of 3D data. Here, the development of a virtual micro-CT laboratory (Micro-CTvlab) is described, which can be used by everyone who is interested in digitisation methods and biological collections and aims at making the micro-CT data exploration of natural history specimens freely available over the internet. The Micro-CTvlab offers to the user virtual image galleries of various taxa which can be displayed and downloaded through a web application. With a few clicks, accurate, detailed and three-dimensional models of species can be studied and virtually dissected without destroying the actual specimen. The data and functions of the Micro-CTvlab can be accessed either on a normal computer or through a dedicated version for mobile devices.

  19. Brave New (Interactive) Worlds: A Review of the Design Affordances and Constraints of Two 3D Virtual Worlds as Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2005-01-01

    Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and distance education. Three-dimensional (3D) virtual worlds are a combination of desktop interactive virtual reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…

  20. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within approximately 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
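    The "rough skin with any desired power spectral density" can be synthesized with the standard random-phase spectral method: assign each frequency the target amplitude, a random phase, and inverse-transform. A minimal 1D Python sketch (the function name and the 1/k² spectrum are illustrative assumptions, not the paper's actual parameters):

    ```python
    import cmath
    import math
    import random

    def rough_profile(n, psd, seed=0):
        """Synthesize a real-valued rough profile of length n whose power
        spectrum follows psd(k) for k = 1..n//2-1, using random phases."""
        rng = random.Random(seed)
        # Build the positive-frequency half-spectrum with random phases.
        spec = [0j] * n
        for k in range(1, n // 2):
            amp = math.sqrt(psd(k))
            phase = rng.uniform(0.0, 2.0 * math.pi)
            spec[k] = amp * cmath.exp(1j * phase)
            spec[n - k] = spec[k].conjugate()  # Hermitian symmetry -> real output
        # Naive inverse DFT (fine for a small demonstration).
        profile = []
        for x in range(n):
            s = sum(spec[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
            profile.append(s.real / n)
        return profile

    # Example: a 1/k^2 power spectrum, giving gently varying roughness.
    z = rough_profile(64, lambda k: 1.0 / k**2)
    ```

    In 2D the same idea is applied to a grid of wavevectors before wrapping the skin around the line geometry; a production implementation would use an FFT rather than the naive transform shown here.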

  1. Integrating a facial scan, virtual smile design, and 3D virtual patient for treatment with CAD-CAM ceramic veneers: A clinical report.

    PubMed

    Lin, Wei-Shao; Harris, Bryan T; Phasuk, Kamolphob; Llop, Daniel R; Morton, Dean

    2018-02-01

    This clinical report describes a digital workflow using the virtual smile design approach augmented with a static 3-dimensional (3D) virtual patient with photorealistic appearance to restore maxillary central incisors by using computer-aided design and computer-aided manufacturing (CAD-CAM) monolithic lithium disilicate ceramic veneers. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. Helical CT scan with 2D and 3D reconstructions and virtual endoscopy versus conventional endoscopy in the assessment of airway disease in neonates, infants and children.

    PubMed

    Yunus, Mahira

    2012-11-01

    To study the use of helical computed tomography 2-D and 3-D images, and virtual endoscopy, in the evaluation of airway disease in neonates, infants and children, and its value in lesion detection, characterisation and extension. Conducted at Al-Noor Hospital, Makkah, Saudi Arabia, from January 1 to June 30, 2006, the study comprised 40 patients with stridor from various causes of airway obstruction. They were examined by helical CT scan with 2-D and 3-D reconstructions and virtual endoscopy. The level and characterisation of lesions were determined and the results were compared with actual endoscopic findings. Conventional endoscopy was chosen as the gold standard, and virtual endoscopy was evaluated in terms of sensitivity and specificity. For statistical purposes, SPSS version 10 was used. All CT methods detected airway stenosis or obstruction. Accuracy was 98% (n=40) for virtual endoscopy, 96% (n=48) for 3-D external rendering, 90% (n=45) for multiplanar reconstructions and 86% (n=43) for axial images. The results of 3-D internal and external volume rendering were closer to conventional endoscopy for detection and grading of stenosis than those of 2-D minimum intensity multiplanar reconstruction and axial CT slices. Even high-grade stenosis, through which a conventional endoscope cannot be passed, could be evaluated with the virtual endoscope. One 4-year-old patient with tracheomalacia could not be diagnosed by helical CT scan and virtual bronchoscopy; the condition was diagnosed on conventional endoscopy and required CT scans in inspiration and expiration. Virtual endoscopy (VE) enabled better assessment of stenosis than the reading of 3-D external rendering, 2-D multiplanar reconstruction (MPR) or axial slices. It can replace conventional endoscopy in the assessment of airway disease without any additional risk.
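    The sensitivity and specificity figures against a gold standard reduce to simple ratios over confusion counts. A minimal sketch (the counts below are hypothetical, not the study's data):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN): fraction of true lesions detected.
        Specificity = TN / (TN + FP): fraction of lesion-free cases
        correctly called negative against the gold standard."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts: 38 stenoses found, 1 missed; 5 true negatives,
    # 1 false alarm.
    sens, spec = sensitivity_specificity(tp=38, fn=1, tn=5, fp=1)
    ```

    With these toy counts, sensitivity is 38/39 (about 0.97) and specificity 5/6 (about 0.83).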

  3. Virtually fabricated guide for placement of the C-tube miniplate.

    PubMed

    Paek, Janghyun; Jeong, Do-Min; Kim, Yong; Kim, Seong-Hun; Chung, Kyu-Rhim; Nelson, Gerald

    2014-05-01

    This paper introduces a virtually planned and stereolithographically fabricated guiding system that will allow the clinician to plan carefully for the best location of the device and to achieve an accurate position without complications. The scanned data from preoperative dental casts were edited to obtain preoperative 3-dimensional (3D) virtual models of the dentition. After the 3D virtual models were repositioned, the 3D virtual surgical guide was fabricated. A surgical guide was created onscreen, and then these virtual guides were materialized into real ones using the stereolithographic technique. Whereas the previously described guide required laboratory work to be performed by the orthodontist, our technique is more convenient because the laboratory work is done remotely by computer-aided design/computer-aided manufacturing technology. Because the miniplate is firmly held in place as the patient holds his or her mandibular teeth against the occlusal pad of the surgical guide, there is no risk that the miniscrews can slide on the bone surface during placement. The software program (2.5-dimensional software) in this study combines 2-dimensional cephalograms with 3D virtual dental models. This software is an effective and efficient alternative to 3D software when 3D computed tomography data are not available. To confidently and safely place a miniplate with screw fixation, a simple customized guide for an orthodontic miniplate was introduced. The use of a custom-made, rigid guide when placing miniplates will minimize complications such as vertical mislocation or slippage of the miniplate during placement. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  4. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller only measures rough acceleration over a range of +/-3g with 10% sensitivity, and orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space relative to 4 infrared LEDs. Current results show that it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) for the translation and (0.16, 0.28) for the rotation, respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.
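    The accelerometer alone gives only a coarse attitude estimate, which is why the LED-based pose algorithm is needed. In the static case, pitch and roll can be read directly off the gravity vector; this is the generic textbook relation, not the paper's LED-based method:

    ```python
    import math

    def tilt_from_accel(ax, ay, az):
        """Static pitch and roll (radians) from a 3-axis accelerometer
        reading in g, assuming gravity is the only acceleration acting
        on the controller (no translation, no yaw observability)."""
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        roll = math.atan2(ay, az)
        return pitch, roll

    # Controller lying flat: gravity entirely along +z.
    pitch, roll = tilt_from_accel(0.0, 0.0, 1.0)
    ```

    Yaw is unobservable from gravity alone, which motivates fusing the infrared LED observations for full 6-DOF pose.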

  5. Mapping the neglected space: gradients of detection revealed by virtual reality.

    PubMed

    Dvorkin, Assaf Y; Bogey, Ross A; Harvey, Richard L; Patton, James L

    2012-02-01

    Spatial neglect affects perception along different dimensions. However, there is limited availability of 3-dimensional (3D) methods that fully map out a patient's volume of deficit, although this could guide clinical management. To test whether patients with neglect exhibit simple contralesional versus complex perceptual deficits, and whether deficits are best described using Cartesian (rectangular) or polar coordinates. Seventeen right-hemisphere persons with stroke (8 with a history of neglect) and 9 healthy controls were exposed to a 3D virtual environment. Targets placed in a dense array appeared one at a time in various locations. When tested using a rectangular array of targets, subjects in the neglect group exhibited complex asymmetries across several dimensions in both reaction time and target detection rates. Paper-and-pencil tests detected neglect in only 4 of the 8 patients. When tested using a polar array of targets, 2 patients who initially appeared to perform poorly in both left and near space showed only a simple left-side asymmetry that depended almost entirely on the angle from the sagittal plane. A third patient exhibited left neglect irrespective of the arrangement of targets used. An idealized model with pure dependence on the polar angle demonstrated how such deficits could be misconstrued as near neglect if one uses a rectangular array. Such deficits may be poorly detected by paper-and-pencil tests and even by computerized tests that use regular screens. Assessments that incorporate 3D arrangements of targets enable precise mapping of deficient areas and detect subtle forms of neglect whose identification may be relevant to treatment strategies.
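    The Cartesian-versus-polar question comes down to whether detection rate varies with the angle from the sagittal plane rather than with depth. A hedged sketch of that analysis (function name, coordinate convention, and data are illustrative, not the study's):

    ```python
    import math

    def detection_by_angle(targets, detected, bin_deg=30):
        """Bin detection outcomes by azimuth from the sagittal (x = 0)
        plane. targets: list of (x, z) positions, x negative to the
        left, z depth straight ahead; detected: parallel booleans."""
        bins = {}
        for (x, z), hit in zip(targets, detected):
            angle = math.degrees(math.atan2(x, z))  # 0 deg = straight ahead
            key = int(angle // bin_deg) * bin_deg   # lower edge of the bin
            n_hit, n_all = bins.get(key, (0, 0))
            bins[key] = (n_hit + hit, n_all + 1)
        return {k: h / n for k, (h, n) in bins.items()}

    # Idealized "pure angular" neglect: the far-left target is missed
    # regardless of its depth.
    targets = [(-1.0, 0.5), (-0.2, 1.0), (0.3, 1.0), (1.0, 0.5)]
    rates = detection_by_angle(targets, [False, True, True, True])
    ```

    A flat rate across depth within each angular bin, with a drop confined to leftward bins, is the polar signature the study describes; a rectangular binning of the same data can misread it as near-space neglect.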

  6. Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation.

    PubMed

    Ragan, Eric D; Scerbo, Siroberto; Bacim, Felipe; Bowman, Doug A

    2017-08-01

    Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
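    The amplification technique itself is a simple gain on head yaw. A minimal sketch of the mapping (yaw-only; a real system also tracks pitch and applies the gain continuously per frame):

    ```python
    def virtual_yaw(physical_yaw_deg, gain):
        """Amplified head rotation: map a physical head yaw to a virtual
        camera yaw, wrapped to [-180, 180). With gain = 2, a 90-degree
        physical turn sweeps 180 degrees of the virtual scene, so a
        half-turn of the head covers the full surrounding environment."""
        return (physical_yaw_deg * gain + 180.0) % 360.0 - 180.0

    # gain 1.5: a 60-degree physical turn yields a 90-degree virtual turn.
    v = virtual_yaw(60.0, 1.5)
    ```

    The wrap-around is what lets a display that does not physically surround the user still present all 360 degrees; the study's spatial-orientation costs come precisely from this mismatch between physical and virtual turn angles.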

  7. Reconstituted Three-Dimensional Interactive Imaging

    NASA Technical Reports Server (NTRS)

    Hamilton, Joseph; Foley, Theodore; Duncavage, Thomas; Mayes, Terrence

    2010-01-01

    A method combines two-dimensional images, enhancing the images as well as rendering a 3D, enhanced, interactive computer image or visual model. Any advanced compiler can be used in conjunction with any graphics library package for this method, which is intended to take digitized images and virtually stack them so that they can be interactively viewed as a set of slices. This innovation can take multiple image sources (film or digital) and create a "transparent" image, with higher densities in the image being less transparent. The images are then stacked such that an apparent 3D object is created in virtual space for interactive review of the set of images. This innovation can be used with any application where 3D images are taken as slices of a larger object. These could include machines, materials for inspection, geological objects, or human scanning. Luminous values were stacked into planes with different transparency levels for tissues. These transparency levels can use multiple energy levels, such as the density of CT scans or radioactive density. A desktop computer with enough video memory to produce the image is capable of this work; the memory required changes with the size and resolution of the images to be stacked and viewed.
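    The density-to-transparency stacking described above is, at its core, front-to-back alpha compositing of the slices. A minimal sketch assuming a caller-supplied density-to-opacity mapping (not the NASA implementation):

    ```python
    def composite_slices(slices, density_to_alpha):
        """Front-to-back alpha compositing of stacked 2D slices into one
        image. Each slice is a 2D list of densities in [0, 1]; higher
        density -> higher opacity, so dense structures occlude what lies
        behind them, as in the stacked-slice viewer described above."""
        rows, cols = len(slices[0]), len(slices[0][0])
        out = [[0.0] * cols for _ in range(rows)]
        remaining = [[1.0] * cols for _ in range(rows)]  # transmittance so far
        for sl in slices:
            for r in range(rows):
                for c in range(cols):
                    a = density_to_alpha(sl[r][c])
                    out[r][c] += remaining[r][c] * a * sl[r][c]
                    remaining[r][c] *= (1.0 - a)
        return out

    # Two 1x1 slices: front density 0.5, back density 1.0, alpha == density.
    img = composite_slices([[[0.5]], [[1.0]]], lambda d: d)
    ```

    Interactive viewing then amounts to re-running the composite as the user rotates the stack or edits the alpha mapping; GPUs do exactly this pass in hardware.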

  8. Distance Learning for Students with Special Needs through 3D Virtual Learning

    ERIC Educational Resources Information Center

    Laffey, James M.; Stichter, Janine; Galyen, Krista

    2014-01-01

    iSocial is a 3D Virtual Learning Environment (3D VLE) to develop social competency for students who have been identified with High-Functioning Autism Spectrum Disorders. The motivation for developing a 3D VLE is to improve access to special needs curriculum for students who live in rural or small school districts. The paper first describes a…

  9. Virtual Exploration of the Ring Systems Chemical Universe.

    PubMed

    Visini, Ricardo; Arús-Pous, Josep; Awale, Mahendra; Reymond, Jean-Louis

    2017-11-27

    Here, we explore the chemical space of all virtually possible organic molecules focusing on ring systems, which represent the cyclic cores of organic molecules obtained by removing all acyclic bonds and converting all remaining atoms to carbon. This approach circumvents the combinatorial explosion encountered when enumerating the molecules themselves. We report the chemical universe database GDB4c containing 916 130 ring systems up to four saturated or aromatic rings and maximum ring size of 14 atoms and GDB4c3D containing the corresponding 6 555 929 stereoisomers. Almost all (98.6%) of these ring systems are unknown and represent chiral 3D-shaped macrocycles containing small rings and quaternary centers reminiscent of polycyclic natural products. We envision that GDB4c can serve to select new ring systems from which to design analogs of such natural products. The database is available for download at www.gdb.unibe.ch together with interactive visualization and search tools as a resource for molecular design.
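    The "remove all acyclic bonds" step that yields a ring system has a clean graph formulation: an acyclic bond is a bridge (an edge lying on no cycle), and the ring system is the set of atoms incident to at least one non-bridge bond. A small self-contained sketch on an adjacency dict (illustrative only; it is not the GDB enumeration code, and the per-edge BFS is quadratic, fine for molecule-sized graphs):

    ```python
    from collections import deque

    def cyclic_core(adj):
        """Atoms that sit on at least one ring. An edge (u, v) is a
        bridge iff u and v become disconnected when it is removed;
        every non-bridge edge lies on a cycle."""
        def connected_without(u, v):
            # BFS from u toward v in the graph minus the edge (u, v).
            seen, q = {u}, deque([u])
            while q:
                x = q.popleft()
                for y in adj[x]:
                    if (x, y) in ((u, v), (v, u)):
                        continue  # skip the removed edge in both directions
                    if y == v:
                        return True
                    if y not in seen:
                        seen.add(y)
                        q.append(y)
            return False

        core = set()
        for u in adj:
            for v in adj[u]:
                if u < v and connected_without(u, v):  # each edge once
                    core.update((u, v))
        return core

    # Cyclopropane ring (0-1-2) with a methyl substituent (atom 3) on atom 0.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
    ring_atoms = cyclic_core(adj)
    ```

    Converting the surviving atoms to carbon, as the abstract describes, then collapses chemically distinct molecules onto one ring-system representative, which is what sidesteps the combinatorial explosion.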

  10. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    PubMed

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is fully identifiable to blind users.
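    Recovering 3D positions from the stereo camera rests on the standard rectified-stereo relation Z = f·B/d. A generic back-projection sketch (parameter values are illustrative, not the VIDA system's calibration):

    ```python
    def disparity_to_point(u, v, d, f, baseline, cu, cv):
        """Back-project a pixel (u, v) with disparity d (pixels) from a
        rectified stereo pair into camera coordinates (metres):
        Z = f * B / d,  X = (u - cu) * Z / f,  Y = (v - cv) * Z / f,
        where f is the focal length in pixels, B the baseline in metres
        and (cu, cv) the principal point."""
        if d <= 0:
            raise ValueError("disparity must be positive")
        z = f * baseline / d
        return (u - cu) * z / f, (v - cv) * z / f, z

    # Assumed rig: f = 700 px, 6 cm baseline, principal point (320, 240).
    X, Y, Z = disparity_to_point(420, 240, 35.0, 700.0, 0.06, 320.0, 240.0)
    ```

    Casting a ray through fingertip points computed this way gives the pointing trajectory, and the first scene point near that ray supplies the distance fed back through the tactile interface.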

  11. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is fully identifiable to blind users. PMID:24932864

  12. Virtual manufacturing work cell for engineering

    NASA Astrophysics Data System (ADS)

    Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru

    1997-12-01

    The life cycles of products have been getting shorter. To keep up with this rapid turnover, manufacturing systems must be changed frequently as well. Developing a manufacturing system involves several engineering tasks, such as process planning, layout design, programming, and final testing using actual machines. This development takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method, computer-aided manufacturing engineering using the VMW (CAME-VMW), for the above engineering tasks. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has logical and physical functionality: the logical part simulates sequence control, while the physical part simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so the behavior can be verified precisely before the manufacturing workcell is constructed. The VMW creates an engineering workspace for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual plasma display panel (PDP) manufacturing system and confirmed its effectiveness.

  13. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. This is how the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  14. Design of virtual display and testing system for moving mass electromechanical actuator

    NASA Astrophysics Data System (ADS)

    Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng

    2015-12-01

    To address the control, measurement and virtual motion display of a moving mass electromechanical actuator (MMEA), a virtual testing system was developed based on a PC-DAQ architecture and the LabVIEW software platform. It accomplishes comprehensive test tasks such as drive control of the MMEA, measurement of kinematic parameters, measurement of centroid position and virtual display of movement. The system solves the alignment of acquisition times between measurement channels on different DAQ cards. On this basis, the research focused on dynamic 3D virtual display in LabVIEW; the virtual display of the MMEA was realized both by calling a DLL and by using 3D graph drawing controls. Considering the collaboration with the rest of the virtual testing system, including the hardware drivers and the data acquisition software, the 3D graph drawing controls method was selected, which achieves synchronized measurement, control and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface renders realistically with smooth motion, solving the problem of displaying and replaying the MMEA's movement inside its closed shell.

  15. 3D virtual environment of Taman Mini Indonesia Indah in a web

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.

    2018-05-01

    Taman Mini Indonesia Indah (TMII) is the largest culture-based recreational park in Indonesia. The 250-acre park contains traditional houses from the various provinces of Indonesia. The official TMII website describes the traditional houses, but the information available to the public is limited. To provide the public with more detailed information about TMII, this research aims to create and develop virtual traditional houses as 3D graphics models and present them on a website. Virtual Reality (VR) technology was used to display a visualization of TMII and its surrounding environment. The research used Blender to create the 3D models and Unity3D to build virtual reality models that can be shown on the web. This research successfully created 33 virtual traditional houses of Indonesian provinces. Textures were taken from the original buildings to make the houses realistic. The result of this research is the TMII website, including virtual culture houses that can be displayed through a web browser. The website consists of virtual environment scenes, and internet users can walk through and navigate inside the scenes.

  16. Mastoid Cavity Dimensions and Shape: Method of Measurement and Virtual Fitting of Implantable Devices

    PubMed Central

    Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.

    2009-01-01

    Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear and potentially for other future applications. The implants require storage space and access to the middle or inner ears. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm3 with standard deviation of 3.69 cm3, ranging from 7.97 to 23.25 cm3. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm and length posterior to EAC 3.16 cm. These linear measurements did not correlate well with volume measurements. The shape of the models was variable to a significant extent making the prediction of successful implantation for a given design based on linear and volumetric measurement unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has been suggested for other human applications. PMID:19372649
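    The cavity volumes reported above ultimately come from counting labeled voxels in the segmented CT stack. A generic volume-from-segmentation sketch (the mask and voxel spacing are toy values, not the Amira pipeline):

    ```python
    def segmented_volume_cm3(mask, voxel_mm):
        """Volume of a segmented region from a stack of binary CT masks.
        mask: 3D nested lists of 0/1 (slice -> row -> column);
        voxel_mm: (dx, dy, dz) voxel spacing in millimetres."""
        n = sum(v for sl in mask for row in sl for v in row)
        dx, dy, dz = voxel_mm
        return n * dx * dy * dz / 1000.0  # mm^3 -> cm^3

    # Toy 2x2x2 mask with 5 mm isotropic voxels:
    # 8 voxels * 125 mm^3 each = 1000 mm^3 = 1 cm^3.
    vol = segmented_volume_cm3([[[1, 1], [1, 1]], [[1, 1], [1, 1]]],
                               (5.0, 5.0, 5.0))
    ```

    The linear measurements (height, depth, lengths relative to the external auditory canal) are taken between landmark coordinates on the same model, which is why, as the study notes, they need not correlate with the voxel-counted volume.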

  17. 3D reconstruction and spatial auralization of the "Painted Dolmen" of Antelas

    NASA Astrophysics Data System (ADS)

    Dias, Paulo; Campos, Guilherme; Santos, Vítor; Casaleiro, Ricardo; Seco, Ricardo; Sousa Santos, Beatriz

    2008-02-01

    This paper presents preliminary results on the development of a 3D audiovisual model of the Anta Pintada (painted dolmen) of Antelas, a Neolithic chamber tomb located in Oliveira de Frades and listed as Portuguese national monument. The final aim of the project is to create a highly accurate Virtual Reality (VR) model of this unique archaeological site, capable of providing not only visual but also acoustic immersion based on its actual geometry and physical properties. The project started in May 2006 with in situ data acquisition. The 3D geometry of the chamber was captured using a Laser Range Finder. In order to combine the different scans into a complete 3D visual model, reconstruction software based on the Iterative Closest Point (ICP) algorithm was developed using the Visualization Toolkit (VTK). This software computes the boundaries of the room on a 3D uniform grid and populates its interior with "free-space nodes", through an iterative algorithm operating like a torchlight illuminating a dark room. The envelope of the resulting set of "free-space nodes" is used to generate a 3D iso-surface approximating the interior shape of the chamber. Each polygon of this surface is then assigned the acoustic absorption coefficient of the corresponding boundary material. A 3D audiovisual model operating in real-time was developed for a VR Environment comprising head-mounted display (HMD) I-glasses SVGAPro, an orientation sensor (tracker) InterTrax 2 with 3 Degrees Of Freedom (3DOF) and stereo headphones. The auralisation software is based on a geometric model. This constitutes a first approach, since geometric acoustics have well-known limitations in rooms with irregular surfaces. The immediate advantage lies in their inherent computational efficiency, which allows real-time operation. The program computes the early reflections forming the initial part of the chamber's impulse response (IR), which carry the most significant cues for source localisation. 
These early reflections are processed through Head Related Transfer Functions (HRTF) updated in real-time according to the orientation of the user's head, so that sound waves appear to come from the correct location in space, in agreement with the visual scene. The late-reverberation tail of the IR is generated by an algorithm designed to match the reverberation time of the chamber, calculated from the actual acoustic absorption coefficients of its surfaces. The sound output to the headphones is obtained by convolving the IR with anechoic recordings of the virtual audio source.
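    The final auralisation step, convolving the chamber's impulse response with an anechoic recording, is plain discrete convolution. A minimal direct-form sketch (real-time systems use partitioned FFT convolution instead):

    ```python
    def convolve(signal, ir):
        """Direct convolution of a dry (anechoic) signal with a room
        impulse response: each input sample launches a scaled, delayed
        copy of the IR, and the copies sum at the output."""
        out = [0.0] * (len(signal) + len(ir) - 1)
        for i, s in enumerate(signal):
            for j, h in enumerate(ir):
                out[i + j] += s * h
        return out

    # A unit impulse through an IR reproduces the IR itself.
    y = convolve([1.0, 0.0], [1.0, 0.5, 0.25])
    ```

    In the binaural case described above, the early-reflection part of the IR is first filtered through the HRTF pair for the listener's current head orientation, giving one such convolution per ear.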

  18. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand resolution increases beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
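    The idea of choosing a projection origin that minimizes seam discontinuity can be illustrated on a single equirectangular row: shift the wrap-around point to where neighbouring samples differ least. This is a crude stand-in for the abstract's entropy criterion; the cost function and data are illustrative only:

    ```python
    def best_origin(row, candidates):
        """Pick the horizontal origin (column shift) for one
        equirectangular row that minimizes the wrap-around seam
        discontinuity, i.e. the jump between the row's new first and
        last samples. A full implementation would aggregate an entropy
        measure over all rows and both rotation axes."""
        n = len(row)

        def seam_cost(shift):
            return abs(row[shift % n] - row[(shift - 1) % n])

        return min(candidates, key=seam_cost)

    # The large jump around value 50 should not land on the frame edge.
    vals = [5, 5, 9, 50, 9, 6]
    origin = best_origin(vals, range(6))
    ```

    Placing the seam where adjacent content already matches keeps high-gradient structure away from the projection boundary, which is what lets the encoder spend fewer bits there.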

  19. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…

  20. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  1. Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study

    ERIC Educational Resources Information Center

    Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat

    2013-01-01

    Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…

  2. Application of two segmentation protocols during the processing of virtual images in rapid prototyping: ex vivo study with human dry mandibles.

    PubMed

    Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida

    2013-12-01

    The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, on which linear measurements between anatomical landmarks were obtained and compared at a 5% significance level. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from either segmentation protocol (p = 0.24). During the design of a virtual 3D reconstruction, both the "outline only" and "all-boundary lines" segmentation protocols can be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel. Establishing a better protocol for this phase allows the construction of a biomodel whose characteristics are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.
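The comparison above rests on paired linear measurements (the same landmark distance measured on the dry mandible and on its virtual reconstruction) tested at the 5% level. The study's exact statistical test is not specified here; a paired t statistic is one common choice, sketched below with hypothetical landmark distances, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(real, virtual):
    """Paired t statistic for measurements taken on physical specimens
    vs. the same landmarks on their virtual 3D reconstructions."""
    d = [r - v for r, v in zip(real, virtual)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical landmark distances in millimetres (illustrative only).
real    = [45.2, 38.7, 52.1, 60.4, 33.9]
virtual = [45.0, 38.9, 52.3, 60.1, 34.1]
t = paired_t(real, virtual)
print(round(t, 3))
```

With |t| below the critical value for the chosen significance level, the measurement sets would be considered statistically indistinguishable, mirroring the study's p = 0.24 outcome.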

  3. Optimizing Coverage of Three-Dimensional Wireless Sensor Networks by Means of Photon Mapping

    DTIC Science & Technology

    2013-12-01

    …information about the monitored space is sensed?” Solving this formulation of the AGP relies upon the creation of a model describing how a set of...simulated photons will propagate in a 3D virtual environment. Furthermore, the photon model requires an efficient data structure with small memory

  4. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    PubMed

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of glioma located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. The neuro-imaging data, including CT, CTA, DSA, MRI, and fMRI, were input to a 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision-making, and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operation accuracy. This technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D-reconstruction-based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operation safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.

  5. High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)

    2001-01-01

    A real-time, dynamic, free-space virtual reality 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4(pi) solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.
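The CGH step described above can be illustrated in its idealised form: the hologram plane holds the inverse Fourier transform of the desired image-plane field, so forward propagation (modelled here as a forward FFT) reconstructs the image. This is a minimal sketch, not the patented display pipeline; a fully complex-valued hologram is an idealisation, since real LC panels typically modulate phase or amplitude only:

```python
import numpy as np

# Target intensity pattern: a bright square on a dark field.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0

# Complex-valued Fourier CGH: inverse-transform the desired image-plane
# field (amplitude = sqrt of intensity) into the hologram plane.
hologram = np.fft.ifft2(np.sqrt(target))

# "Illuminating" the hologram (forward FFT) reconstructs the image.
reconstruction = np.abs(np.fft.fft2(hologram)) ** 2
print(np.allclose(reconstruction, target))
```

Practical CGH encodings (phase-only, iterative Gerchberg-Saxton refinement) trade this exactness for something a physical LC panel can display.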

  6. An Onboard ISS Virtual Reality Trainer

    NASA Technical Reports Server (NTRS)

    Miralles, Evelyn

    2013-01-01

    Prior to the retirement of the Space Shuttle, many exterior repairs on the International Space Station (ISS) were carried out by shuttle astronauts, trained on the ground and flown to the Station to perform these specific repairs. With the retirement of the shuttle, this is no longer an available option. As such, the need for ISS crew members to review scenarios while in flight, either for tasks they already trained for on the ground or for contingency operations, has become a very critical issue. NASA astronauts prepare for Extra-Vehicular Activities (EVA), or spacewalks, through numerous training media, such as: self-study, part task training, underwater training in the Neutral Buoyancy Laboratory (NBL), hands-on hardware reviews and training at the Virtual Reality Laboratory (VRLab). In many situations, the time between the last training session and an EVA task might be 6 to 8 months. EVA tasks are critical for a mission, and as time passes the crew members may lose proficiency on previously trained tasks; their options to refresh or learn a new skill while in flight are limited to reading training materials and watching videos. In addition, there is an increased need for unplanned contingency repairs to fix problems arising as the Station ages. In order to help the ISS crew members maintain EVA proficiency or train for contingency repairs during their mission, the Johnson Space Center's VRLab designed an immersive ISS Virtual Reality Trainer (VRT). The VRT incorporates a unique optical system that makes use of the already successful Dynamic On-board Ubiquitous Graphics (DOUG) software to assist crew members with procedure reviews and contingency EVAs while on board the Station. The need to train and re-train crew members for EVAs and contingency scenarios is crucial and extremely demanding. ISS crew members are now asked to perform EVA tasks for which they have not been trained and potentially have never seen before. 
The Virtual Reality Trainer (VRT) provides an immersive 3D environment similar to the one experienced at the VRLab crew training facility at the NASA Johnson Space Center. VRT bridges the gap by allowing crew members to experience an interactive, 3D environment to reinforce skills already learned and to explore new work sites and repair procedures outside the Station.

  7. The Development of a Virtual Company to Support the Reengineering of the NASA/Goddard Hubble Space Telescope Control Center System

    NASA Technical Reports Server (NTRS)

    Lehtonen, Ken

    1999-01-01

    This is a report to the Third Annual International Virtual Company Conference on The Development of a Virtual Company to Support the Reengineering of the NASA/Goddard Hubble Space Telescope (HST) Control Center System. It begins with an HST Science "Commercial": a brief tour of our universe showing various pictures taken from the Hubble Space Telescope. The presentation then reviews the project background and goals. Evolution of the Control Center System ("CCS Inc.") is then reviewed. Topics of interest to "virtual companies" are reviewed: (1) "How To Choose A Team" (2) "Organizational Model" (3) "The Human Component" (4) "'Virtual Trust' Among Teaming Companies" (5) "Unique Challenges to Working Horizontally" (6) "The Cultural Impact" (7) "Lessons Learned".

  8. Lessons about Virtual-Environment Software Systems from 20 years of VE building

    PubMed Central

    Taylor, Russell M.; Jerald, Jason; VanderKnyff, Chris; Wendt, Jeremy; Borland, David; Marshburn, David; Sherman, William R.; Whitton, Mary C.

    2010-01-01

    What are desirable and undesirable features of virtual-environment (VE) software architectures? What should be present (and absent) from such systems if they are to be optimally useful? How should they be structured? To help answer these questions we present experience from application designers, toolkit designers, and VE system architects along with examples of useful features from existing systems. Topics are organized under the major headings of: 3D space management, supporting display hardware, interaction, event management, time management, computation, portability, and the observation that less can be better. Lessons learned are presented as discussion of the issues, field experiences, nuggets of knowledge, and case studies. PMID:20567602

  9. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  10. High-Energy 3D Calorimeter for Use in Gamma-Ray Astronomy Based on Position-Sensitive Virtual Frisch-Grid CdZnTe Detectors

    NASA Technical Reports Server (NTRS)

    Moiseev, A.; Bolotnikov, A.; DeGeronimo, G.; Hays, E.; James, R.; Thompson, D.; Vernon, E.

    2017-01-01

    We will present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CdZnTe (hereafter CZT) detectors. This calorimeter aims to measure photons with energies from approximately 100 keV to 20-50 MeV. The expected energy resolution at 662 keV is better than 1% FWHM, and the photon interaction position-measurement accuracy is better than 1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section from 5 x 5 to 7 x 7 mm2 and length of 2 - 4 cm. The bars are arranged in modules of 4 x 4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., greater than 1 cm). Also, it allows us to use the standard (unselected) grade crystals, while achieving the energy resolution of the premium detectors and thus substantially reducing the cost of the instrument. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of gamma rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons (like a focal plane detector in a Compton camera). Also, it could provide suitable energy resolution to allow for spectroscopic measurements of gamma ray lines from nuclear decays.

  11. High-energy 3D calorimeter for use in gamma-ray astronomy based on position-sensitive virtual Frisch-grid CdZnTe detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moiseev, Alexander; Bolotnikov, A.; DeGeronimo, G.

    Here, we will present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CdZnTe (hereafter CZT) detectors. This calorimeter aims to measure photons with energies from ~100 keV to 20–50 MeV. The expected energy resolution at 662 keV is better than 1% FWHM, and the photon interaction position-measurement accuracy is better than 1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section from 5×5 to 7×7 mm² and length of 2–4 cm. The bars are arranged in modules of 4×4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., >1 cm). Also, it allows us to use the standard (unselected) grade crystals, while achieving the energy resolution of the premium detectors and thus substantially reducing the cost of the instrument. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of γ-rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons (like a focal plane detector in a Compton camera). Also, it could provide suitable energy resolution to allow for spectroscopic measurements of γ-ray lines from nuclear decays.

  12. High-energy 3D calorimeter for use in gamma-ray astronomy based on position-sensitive virtual Frisch-grid CdZnTe detectors

    DOE PAGES

    Moiseev, Alexander; Bolotnikov, A.; DeGeronimo, G.; ...

    2017-12-19

    Here, we will present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CdZnTe (hereafter CZT) detectors. This calorimeter aims to measure photons with energies from ~100 keV to 20–50 MeV. The expected energy resolution at 662 keV is better than 1% FWHM, and the photon interaction position-measurement accuracy is better than 1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section from 5×5 to 7×7 mm² and length of 2–4 cm. The bars are arranged in modules of 4×4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., >1 cm). Also, it allows us to use the standard (unselected) grade crystals, while achieving the energy resolution of the premium detectors and thus substantially reducing the cost of the instrument. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of γ-rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons (like a focal plane detector in a Compton camera). Also, it could provide suitable energy resolution to allow for spectroscopic measurements of γ-ray lines from nuclear decays.

  13. DHM simulation in virtual environments: a case-study on control room design.

    PubMed

    Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G

    2012-01-01

    This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room through a participative design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed according to ergonomics standards. Using the Unity3D platform to build the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between end users, managers, and the human factors team.

  14. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-08-01

    Objective. At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s-1. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplishing complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.

  15. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface.

    PubMed

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-08-01

    At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s(-1). Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplishing complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
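The sensorimotor-rhythm control signal at the heart of such a BCI comes down to tracking power in the mu band (roughly 8-12 Hz) of the EEG: suppressing the rhythm (event-related desynchronization) versus leaving it strong becomes the control dimension. The sketch below uses synthetic sinusoids, not the study's signal-processing chain, whose filtering and electrode weighting are unspecified here:

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Average spectral power in the mu band (8-12 Hz), the sensorimotor
    rhythm that subjects learn to modulate for BCI control."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].sum() / len(signal)

fs = 256
t = np.arange(fs) / fs                      # one second of synthetic "EEG"
strong = np.sin(2 * np.pi * 10 * t)         # strong 10 Hz mu rhythm (rest)
weak = 0.2 * np.sin(2 * np.pi * 10 * t)     # suppressed rhythm (imagery)
# A simple control law would threshold this power to issue movement commands.
print(band_power(strong, fs) > band_power(weak, fs))
```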

  16. What People Talk About in Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Maher, Mary Lou

    This chapter examines what people talk about in virtual worlds, employing protocol analysis. Each of two scenario studies was developed to assess the impact of virtual worlds as a collaborative environment for a specific purpose: one for learning and one for designing. The first designed a place in Active Worlds for a course on Web Site Design, having group learning spaces surrounded by individual student galleries. Student text chat was analyzed through a coding scheme with four major categories: control, technology, learning, and place. The second studied expert architects in a Second Life environment called DesignWorld that combined 3D modeling and sketching tools. Video and audio recordings were coded in terms of four categories of communication content (designing, representation of the model, awareness of each other, and software features), and in terms of synthesis comparing alternative designs versus analysis of how well the proposed solution satisfies the given design task. Both studies found that people talk about their avatars, identity, and location in the virtual world. However, the discussion is chiefly about the task and not about the virtual world, implying that virtual worlds provide a viable environment for learning and designing that does not distract people from their task.

  17. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.
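The corrupted-frame exclusion step mentioned above can be sketched with simple global statistics: frames that are nearly black, washed out by specular flashes, or almost uniform carry no usable texture for tracking and can be skipped. The thresholds below are hypothetical, not the paper's:

```python
import numpy as np

def is_corrupted(frame, low=5.0, high=250.0, min_std=2.0):
    """Flag bronchoscopic video frames that are nearly black, washed
    out, or almost uniform, so tracking can exclude them."""
    m, s = frame.mean(), frame.std()
    return m < low or m > high or s < min_std

rng = np.random.default_rng(0)
good = rng.integers(40, 200, size=(48, 64)).astype(float)  # textured frame
flash = np.full((48, 64), 255.0)                           # specular washout
print(is_corrupted(good), is_corrupted(flash))
```

Real systems typically add temporal checks (sudden global intensity jumps between consecutive frames) on top of per-frame statistics.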

  18. Visualization of the Left Extraperitoneal Space and Spatial Relationships to Its Related Spaces by the Visible Human Project

    PubMed Central

    Xu, Haotong; Li, Xiaoxiao; Zhang, Zhengzhi; Qiu, Mingguo; Mu, Qiwen; Wu, Yi; Tan, Liwen; Zhang, Shaoxiang; Zhang, Xiaoming

    2011-01-01

    Background The major hindrance to multidetector CT imaging of the left extraperitoneal space (LES), and the detailed spatial relationships to its related spaces, is that there is no obvious density difference between them. Traditional gross anatomy and thick-slice sectional anatomy imagery are also insufficient to show the anatomic features of this narrow space in three dimensions (3D). To overcome these obstacles, we used a new method to visualize the anatomic features of the LES and its spatial associations with related spaces, in random sections and in 3D. Methods In conjunction with Mimics® and Amira® software, we used thin-slice cross-sectional images of the upper abdomen, retrieved from the Chinese and American Visible Human datasets and the Chinese Virtual Human dataset, to display anatomic features of the LES and spatial relationships of the LES to its related spaces, especially the gastric bare area. The anatomic location of the LES was presented on 3D sections reconstructed from CVH2 images and CT images. Principal Findings Of particular note among our results, the LES consists of the left sub-diaphragmatic fat space and the gastric bare area. The appearance of the fat pad at the cardiac notch contributes to converting the shape of the anteroexternal surface of the LES from triangular to trapezoidal. Moreover, the LES is adjacent to the lesser omentum and the hepatic bare area in the anterointernal and right rear directions, respectively. Conclusion The LES and its related spaces were imaged in 3D using visualization techniques for the first time. This technique is a promising new method for exploring detailed communication relationships among other abdominal spaces, and will promote research on the dynamic extension of abdominal diseases, such as acute pancreatitis and intra-abdominal carcinomatosis. PMID:22087259

  19. Use of a Three-Dimensional Virtual Environment to Teach Drug-Receptor Interactions

    PubMed Central

    Bracegirdle, Luke; McLachlan, Sarah I.H.; Chapman, Stephen R.

    2013-01-01

    Objective. To determine whether using 3-dimensional (3D) technology to teach pharmacy students about the molecular basis of the interactions between drugs and their targets is more effective than traditional lecture using 2-dimensional (2D) graphics. Design. Second-year students enrolled in a 4-year masters of pharmacy program in the United Kingdom were randomly assigned to attend either a 3D or 2D presentation on 3 drug targets, the β-adrenoceptor, the Na+-K+ ATPase, and the nicotinic acetylcholine receptor. Assessment. A test was administered to assess the ability of both groups of students to solve problems that required analysis of molecular interactions in 3D space. The group that participated in the 3D teaching presentation performed significantly better on the test than the group who attended the traditional lecture with 2D graphics. A questionnaire was also administered to solicit students’ perceptions about the 3D experience. The majority of students enjoyed the 3D session and agreed that the experience increased their enthusiasm for the course. Conclusions. Viewing a 3D presentation of drug-receptor interactions improved student learning compared to learning from a traditional lecture and 2D graphics. PMID:23459131

  20. Use of a three-dimensional virtual environment to teach drug-receptor interactions.

    PubMed

    Richardson, Alan; Bracegirdle, Luke; McLachlan, Sarah I H; Chapman, Stephen R

    2013-02-12

    Objective. To determine whether using 3-dimensional (3D) technology to teach pharmacy students about the molecular basis of the interactions between drugs and their targets is more effective than traditional lecture using 2-dimensional (2D) graphics. Design. Second-year students enrolled in a 4-year masters of pharmacy program in the United Kingdom were randomly assigned to attend either a 3D or 2D presentation on 3 drug targets, the β-adrenoceptor, the Na(+)-K(+) ATPase, and the nicotinic acetylcholine receptor. Assessment. A test was administered to assess the ability of both groups of students to solve problems that required analysis of molecular interactions in 3D space. The group that participated in the 3D teaching presentation performed significantly better on the test than the group who attended the traditional lecture with 2D graphics. A questionnaire was also administered to solicit students' perceptions about the 3D experience. The majority of students enjoyed the 3D session and agreed that the experience increased their enthusiasm for the course. Conclusions. Viewing a 3D presentation of drug-receptor interactions improved student learning compared to learning from a traditional lecture and 2D graphics.

  1. Integration of oncologic margins in three-dimensional virtual planning for head and neck surgery, including a validation of the software pathway.

    PubMed

    Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H

    2015-10-01

    Three-dimensional (3D) virtual planning of reconstructive surgery after resection is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, planning the oncologic resection margins is difficult because tumours are poorly visualised in current 3D planning. Embedding tumour delineation from magnetic resonance imaging (MRI), similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding MRI-based tumour delineation within the 3D virtual surgical planning. The software pathway was validated using five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm. Tumour volumes and localisation were determined in both software stages for comparison analysis. The approach was applied to three clinical cases. A conversion algorithm was developed to translate the tumour delineation data to the 3D virtual plan environment. The average difference in tumour volume was 1.7%. This study reports a validated software pathway, providing multi-modality image fusion for 3D virtual surgical planning. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
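The volume comparison between the two software stages (radiotherapy delineation vs. converted surgical plan) can be sketched on boolean voxel masks; the masks and voxel size below are synthetic, not the study's data:

```python
import numpy as np

def volume_difference_pct(mask_rt, mask_vsp, voxel_mm3=1.0):
    """Percent difference between a tumour volume delineated in the
    radiotherapy software and the same tumour after conversion into the
    3D virtual surgical plan (both given as boolean voxel masks)."""
    v_rt = mask_rt.sum() * voxel_mm3
    v_vsp = mask_vsp.sum() * voxel_mm3
    return abs(v_rt - v_vsp) / v_rt * 100.0

# Synthetic tumour: a 10x10x10 voxel cube (1000 voxels).
a = np.zeros((20, 20, 20), dtype=bool)
a[5:15, 5:15, 5:15] = True
# Converted copy that lost 5 edge voxels during conversion.
b = a.copy()
b[5, 5, 5:10] = False
print(volume_difference_pct(a, b))  # about 0.5 percent
```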

  2. An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform

    NASA Astrophysics Data System (ADS)

    Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang

    2018-06-01

    This work presents a fast and efficient method for constructing an open 3D manipulator virtual simulation platform, which makes it easier for teachers and students to learn the forward and inverse kinematics of a robot manipulator. The method was implemented in MATLAB: the Robotics Toolbox, the MATLAB GUI, and 3D animation, with models built in SolidWorks, were combined to produce a good visualization of the system. The advantages of this quick-build approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator constructed by the authors' research group is used as an example. The implementation steps of the method are described in detail, resulting in a highly open and realistic manipulator 3D virtual simulation platform. Test results, with graphs obtained from simulation, show that the platform can be constructed quickly, offers good usability and high maneuverability, and can meet the needs of scientific research and teaching.
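    The forward-kinematics computation that such a platform visualizes is conventionally built from Denavit-Hartenberg link transforms (this is what the MATLAB Robotics Toolbox provides). A minimal Python sketch, using a hypothetical planar two-link arm rather than the Schunk arm's actual parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform (4x4 homogeneous matrix)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the link transforms to obtain the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical planar 2-link arm (link lengths 1.0 and 0.5), both joints at 90 deg.
dh_table = [(0.0, 1.0, 0.0), (0.0, 0.5, 0.0)]
pose = forward_kinematics([np.pi / 2, np.pi / 2], dh_table)
print(np.round(pose[:3, 3], 3))
```

    Inverse kinematics then searches for joint angles that reproduce a desired end-effector pose.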

  3. Incidental Learning in 3D Virtual Environments: Relationships to Learning Style, Digital Literacy and Information Display

    ERIC Educational Resources Information Center

    Thomas, Wayne W.; Boechler, Patricia M.

    2014-01-01

    With teachers taking more interest in utilizing 3D virtual environments for educational purposes, research is needed to understand how learners perceive and process information within virtual environments (Eschenbrenner, Nah, & Siau, 2008). In this study, the authors sought to determine if learning style or digital literacy predict incidental…

  4. Teaching Physics to Deaf College Students in a 3-D Virtual Lab

    ERIC Educational Resources Information Center

    Robinson, Vicki

    2013-01-01

    Virtual worlds are used in many educational and business applications. At the National Technical Institute for the Deaf at Rochester Institute of Technology (NTID/RIT), deaf college students are introduced to the virtual world of Second Life, which is a 3-D immersive, interactive environment, accessed through computer software. NTID students use…

  5. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  6. Allocentric information is used for memory-guided reaching in depth: A virtual reality study.

    PubMed

    Klinghammer, Mathias; Schütz, Immo; Blohm, Gunnar; Fiehler, Katja

    2016-12-01

    Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay the scene reappeared, but with one object missing (=reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target-distance. Reaching endpoints systematically varied with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Thereby, retinal disparity and vergence as well as object size provide important binocular and monocular depth cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Virtual reality: Avatars in human spaceflight training

    NASA Astrophysics Data System (ADS)

    Osterlund, Jeffrey; Lawrence, Brad

    2012-02-01

    With advancements in high spatial and temporal resolution graphics, along with advancements in 3D display capabilities to model, simulate, and analyze human-to-machine interfaces and interactions, virtual environments are being used to develop everything from gaming, movie special effects, and animation to the design of automobiles. The use of multiple-object motion capture technology and digital human tools in aerospace has proven to be a more cost-effective alternative to physical prototypes; it provides a more efficient, flexible environment that is responsive to changes in design and training, and supports early human factors considerations concerning the operation of a complex launch vehicle or spacecraft. United Space Alliance (USA) has deployed these techniques and tools under Research and Development (R&D) activities on both spacecraft assembly and ground processing operations design and training for the Orion Crew Module. USA utilizes specialized products chosen for their functionality, including software and fixed-base hardware (e.g., infrared and visible-red cameras), along with cyber gloves to capture fine motor dexterity of the hands. The key findings of the R&D were: mock-ups should be built so they do not obstruct the cameras' view of the markers being tracked; a mock-up toolkit should be assembled to facilitate dynamic design changes; markers should be placed in accurate positions on humans and flight hardware to help with tracking; 3D models used in the virtual environment should be stripped of non-essential data; workstations with high computational capability are required to handle the large model data sets; and Technology Interchange Meetings with vendors and other industries that also utilize virtual reality applications need to occur on a continual basis, enabling USA to maintain its leading edge in this technology.
Human spaceflight simulation training that utilizes virtual reality technologies makes it possible to familiarize trainees with operational processes and assess those processes, to train virtually, to experiment with "what if" scenarios, and to expedite immediate changes to validate the design implementation. Training benefits include 3D animation for post-training assessment; placement of avatars within replicated 3D work environments for assembling or processing hardware; multiple viewpoints from which processes can be viewed and assessed, giving evaluators the ability to judge task feasibility and identify potential support equipment needs; and human factors determinations, such as reach, visibility, and accessibility. Multiple-object motion capture technology provides an effective tool to train and assess ergonomic risks, to simulate and detect negative interactions between technicians and their proposed workspaces, and to evaluate spaceflight systems prior to, and as part of, the design process to contain costs and reduce schedule delays.

  8. Knowledge and Valorization of Historical Sites Through 3D Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archaeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share via web the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. The method has been verified on the ruins of the archaeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images) and passing through the 3D digital survey models and other historical information, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow users to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.

  9. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections the software generates, and allows for visualization of, images of virtual sections generated in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.

  10. Familiarity from the configuration of objects in 3-dimensional space and its relation to déjà vu: a virtual reality investigation.

    PubMed

    Cleary, Anne M; Brown, Alan S; Sawyer, Benjamin D; Nomi, Jason S; Ajoku, Adaeze C; Ryals, Anthony J

    2012-06-01

    Déjà vu is the striking sense that the present situation feels familiar, alongside the realization that it has to be new. According to the Gestalt familiarity hypothesis, déjà vu results when the configuration of elements within a scene maps onto a configuration previously seen, but the previous scene fails to come to mind. We examined this using virtual reality (VR) technology. When a new immersive VR scene resembled a previously-viewed scene in its configuration but people failed to recall the previously-viewed scene, familiarity ratings and reports of déjà vu were indeed higher than for completely novel scenes. People also exhibited the contrasting sense of newness and of familiarity that is characteristic of déjà vu. Familiarity ratings and déjà vu reports among scenes recognized as new increased with increasing feature-match of a scene to one stored in memory, suggesting that feature-matching can produce familiarity and déjà vu when recall fails. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Scanning 3D full human bodies using Kinects.

    PubMed

    Tong, Jing; Zhou, Jin; Liu, Ligang; Pan, Zhigeng; Yan, Hao

    2012-04-01

    Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and can therefore be acquired easily by everyday users. However, the depth data captured by a Kinect beyond a certain distance is of very low quality. In this paper, we present a novel scanning system for capturing 3D full human body models by using multiple Kinects. To avoid interference phenomena, we use two Kinects to capture the upper and lower parts of a human body, respectively, without an overlapping region. A third Kinect captures the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts of different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which solves the loop closure problem efficiently. Misalignment caused by complex occlusion can also be handled reasonably by our global alignment algorithm. The experimental results show the efficiency and applicability of our system, which obtains impressive results in a few minutes with low-price devices and is thus practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.
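    The pairwise registration described above builds on rigid point-set alignment, which the non-rigid template deformation then extends per region. A minimal sketch of the rigid building block (the Kabsch/Procrustes fit via SVD; the point cloud below is synthetic, not Kinect data):

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Procrustes: best-fit rotation R and translation t with dst ~ R @ src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: rotate a synthetic cloud by a known rotation and recover it.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.1, -0.2, 0.5])
R, t = rigid_align(cloud, moved)
print(np.allclose(R, Rz))
```

    Non-rigid pipelines apply such fits locally and then distribute the residual error globally, which is what resolves the loop closure between the three Kinect views.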

  12. Navigation performance in virtual environments varies with fractal dimension of landscape.

    PubMed

    Juliani, Arthur W; Bies, Alexander J; Boydston, Cooper R; Taylor, Richard P; Sereno, Margaret E

    2016-09-01

    Fractal geometry has been used to describe natural and built environments, but has yet to be studied in navigational research. In order to establish a relationship between the fractal dimension (D) of a natural environment and humans' ability to navigate such spaces, we conducted two experiments using virtual environments that simulate the fractal properties of nature. In Experiment 1, participants completed a goal-driven search task either with or without a map in landscapes that varied in D. In Experiment 2, participants completed a map-reading and location-judgment task in separate sets of fractal landscapes. In both experiments, task performance was highest at the low-to-mid range of D, which was previously reported as most preferred and discriminable in studies of fractal aesthetics and discrimination, respectively, supporting a theory of visual fluency. The applicability of these findings to architecture, urban planning, and the general design of constructed spaces is discussed.
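    The fractal dimension D that these landscapes vary over is commonly estimated by box counting: the slope of log N(eps) versus log(1/eps), where N(eps) is the number of occupied boxes of side eps. A minimal sketch (the sanity check uses a filled square, whose dimension is 2; the box sizes are illustrative):

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate fractal dimension D as the slope of log N(eps) vs log(1/eps)."""
    counts = []
    for eps in box_sizes:
        # Number of distinct grid cells of side eps that contain a point.
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a densely sampled filled unit square should give D close to 2.
xs, ys = np.meshgrid(np.linspace(0, 1, 256, endpoint=False),
                     np.linspace(0, 1, 256, endpoint=False))
square = np.column_stack([xs.ravel(), ys.ravel()])
print(round(box_counting_dimension(square, [0.5, 0.25, 0.125, 0.0625]), 1))  # prints 2.0
```

    Terrain generators for such experiments typically produce height fields with a target D and then sample them into navigable scenes.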

  13. OpenWebGlobe 2: Visualization of Complex 3D Geodata in the (Mobile) Web Browser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with extremely high polygon counts and vast amounts of textures, at interactive frame rates is still very challenging, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
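    Tiled delivery of this kind typically indexes the data in a Web Mercator quadtree. A minimal sketch of the standard tile-index computation used by most tiled map and globe servers (this is the common slippy-map scheme, not necessarily OpenWebGlobe 2's exact layout):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """WGS84 lat/lon to Web Mercator tile indices (x, y) at a zoom level."""
    n = 2 ** zoom                                   # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# Tile containing Basel (47.56 N, 7.59 E) at zoom level 10.
print(latlon_to_tile(47.56, 7.59, 10))
```

    A client only requests the tiles intersecting the current view frustum, which is what makes out-of-core rendering of worldwide datasets feasible.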

  14. A Novel Approach for Efficient Pharmacophore-based Virtual Screening: Method and Applications

    PubMed Central

    Dror, Oranit; Schneidman-Duhovny, Dina; Inbar, Yuval; Nussinov, Ruth; Wolfson, Haim J.

    2009-01-01

    Virtual screening is emerging as a productive and cost-effective technology in rational drug design for the identification of novel lead compounds. An important model for virtual screening is the pharmacophore: the spatial configuration of essential features that enables a ligand molecule to interact with a specific target receptor. In the absence of a known receptor structure, a pharmacophore can be identified from a set of ligands that have been observed to interact with the target receptor. Here, we present a novel computational method for pharmacophore detection and virtual screening. The pharmacophore detection module is able to: (i) align multiple flexible ligands in a deterministic manner without exhaustive enumeration of the conformational space, (ii) detect subsets of input ligands that may bind to different binding sites or have different binding modes, (iii) address cases where the input ligands have different affinities by defining weighted pharmacophores based on the number of ligands that share them, and (iv) automatically select the most appropriate pharmacophore candidates for virtual screening. The algorithm is highly efficient, allowing a fast exploration of the chemical space by virtual screening of huge compound databases. The performance of PharmaGist was successfully evaluated on a commonly used dataset for the G-protein-coupled receptor alpha1A. Additionally, a large-scale evaluation using the DUD (directory of useful decoys) dataset was performed. DUD contains 2950 active ligands for 40 different receptors, with 36 decoy compounds for each active ligand. PharmaGist enrichment rates are comparable with those of other state-of-the-art tools for virtual screening. Availability: the software is available for download, and a user-friendly web interface for pharmacophore detection is available at http://bioinfo3d.cs.tau.ac.il/PharmaGist. PMID:19803502

  15. Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)

    DTIC Science & Technology

    2017-01-01

    created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools...

  16. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs) or simply virtual worlds provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  17. Virtual Jupiter - Real Learning

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, Lanika; Speck, A.; Laffey, J.

    2010-01-01

    How many earthlings have visited Jupiter? None. How many students have visited virtual Jupiter to fulfill their introductory astronomy courses’ requirements? Within the next six months, over 100 students from the University of Missouri will get a chance to explore the planet and its Galilean moons using a 3D virtual environment created especially for them to learn Kepler's and Newton's laws, eclipses, parallax, and other concepts in astronomy. The virtual world of the Jupiter system is a unique 3D environment that allows students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system and encouraging their imagination, curiosity, and motivation. The virtual learning environment lets students work individually or collaborate with their teammates. The 3D world is also a great opportunity for research in astronomy education: it allows investigation of the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students’ motivation and learning outcomes, as well as exploration of how learners’ spatial awareness can be enhanced by working in a 3-dimensional environment.

  18. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
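    For rectified pinhole views like the virtual left/right pair above, the 3D measurement reduces to triangulation from disparity, depth Z = f * B / d. A minimal sketch (the focal length, baseline, and disparity values are hypothetical, not this system's calibration):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Rectified pinhole stereo: depth Z = f * B / d (metres)."""
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 800 px focal length, 12 cm baseline, 16 px disparity.
print(stereo_depth(800.0, 0.12, 16.0))  # prints 6.0
```

    The mirror-switched views give the system its effective baseline B; tracking keeps the fast-moving object inside both virtual views so the disparity d stays measurable.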

  19. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for augmented reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array presenting parallax content from different angles, is proposed. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.

  20. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display; separate head and pointer electro-magnetic position trackers; a heterogeneous parallel graphics processing system; and object-oriented C++ program code.
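    Wall reflections in room models of this kind are commonly computed with the image-source method: each wall mirrors the source, and each mirrored source contributes one delayed echo. A minimal sketch of first-order image sources for a shoebox room (the room geometry and positions below are illustrative, not this system's configuration):

```python
import numpy as np

def first_order_image_sources(source, room_dims):
    """First-order image sources for a shoebox room with walls at
    x=0, x=Lx, y=0, y=Ly, z=0, z=Lz (the mirror principle behind
    simple geometric room-acoustics models)."""
    images = []
    for axis in range(3):
        lo = source.copy()
        hi = source.copy()
        lo[axis] = -source[axis]                      # reflect at the 0 wall
        hi[axis] = 2.0 * room_dims[axis] - source[axis]  # reflect at the far wall
        images.append(lo)
        images.append(hi)
    return np.array(images)

room = np.array([5.0, 4.0, 3.0])   # room size in metres (illustrative)
src = np.array([1.0, 2.0, 1.5])    # sound source position
imgs = first_order_image_sources(src, room)

# Each image source adds one echo; its delay is distance / speed of sound.
listener = np.array([3.0, 2.0, 1.5])
delays_ms = np.linalg.norm(imgs - listener, axis=1) / 343.0 * 1000.0
print(len(imgs))
```

    Higher-order reflections come from mirroring the image sources again; dedicated convolution hardware such as the Convolvotron then renders each delayed, attenuated copy binaurally.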

  1. Overcoming the Critical Shortage of STEM - Prepared Secondary Students Through Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Spencer, Thomas; Berry, Brandon

    2012-01-01

    In developing understanding of technological systems, modeling and simulation tools aid significantly in the learning and visualization processes. In design courses we sketch, extrude, shape, refine and animate with virtual tools in 3D. Final designs are built using a 3D printer. Aspiring architects create spaces with realistic materials and lighting schemes rendered on model surfaces to create breathtaking walk-throughs of virtual spaces. Digital Electronics students design systems that address real-world needs. Designs are simulated in virtual circuits to provide proof of concept before physical construction. This vastly increases students' ability to design and build complex systems. We find that students using modeling and simulation in the learning process assimilate information at a much faster pace and engage more deeply in learning. As Pre-Engineering educators within the Career and Technical Education program at our school division's Technology Academy, our task is to help learners in their quest to develop deep understanding of complex technological systems in a variety of engineering disciplines. Today's young learners have vast opportunities to learn with tools that many of us only dreamed about a decade or so ago when we were engaged in engineering and other technical studies. Today's learner paints with a virtual brush - scenes that can aid significantly in the learning and visualization processes. Modeling and simulation systems have become the new standard tool set in the technical classroom [1-5]. Modeling and simulation systems are now applied as feedback loops in the learning environment. Much of the study of behavior change through the use of feedback loops can be attributed to Stanford psychologist Albert Bandura. "Drawing on several education experiments involving children, Bandura observed that giving individuals a clear goal and a means to evaluate their progress toward that goal greatly increased the likelihood that they would achieve it."

  2. A Hybrid 2D/3D User Interface for Radiological Diagnosis.

    PubMed

    Mandalika, Veera Bhadra Harish; Chernoglazov, Alexander I; Billinghurst, Mark; Bartneck, Christoph; Hurrell, Michael A; Ruiter, Niels de; Butler, Anthony P H; Butler, Philip H

    2018-02-01

    This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.

  3. The Virtual Museum of Minerals and Molecules: Molecular Visualization in a Virtual Hands-On Museum

    ERIC Educational Resources Information Center

    Barak, Phillip; Nater, Edward A.

    2005-01-01

    The Virtual Museum of Minerals and Molecules (VMMM) is a web-based resource presenting interactive, 3-D, research-grade molecular models of more than 150 minerals and molecules of interest to chemical, earth, plant, and environmental sciences. User interactivity with the 3-D display allows models to be rotated, zoomed, and specific regions of…

  4. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  5. Application of 3D Model of Cultural Relics in Virtual Restoration

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.

    2018-04-01

    In the traditional process of splicing cultural relics, experts must manually fit the existing fragments together in order to identify their correct spatial locations. The repeated contact between fragments can easily cause secondary damage to the relics. In this paper, the application process of 3D models of cultural relics in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. By combining traditional restoration methods with computer virtual reality technology, virtual restoration using high-precision 3D models of cultural relics can provide a scientific reference for physical restoration, avoiding the secondary damage caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics are thereby improved.

  6. Intelligent web agents for a 3D virtual community

    NASA Astrophysics Data System (ADS)

    Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar

    2003-08-01

    In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based virtual communities, drawing on distributed artificial intelligence, intelligent agent techniques, and the databases and knowledge bases of a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who share a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), Avatars represent the educators, students, and other visitors to the world. Intelligent agents, represented as specially dressed Avatars, are available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based Avatars is given, and the intelligent Web agent software system for the 3D virtual community has been implemented successfully.

  7. ESL Teacher Training in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Kozlova, Iryna; Priven, Dmitri

    2015-01-01

    Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…

  8. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  9. Authoring Adaptive 3D Virtual Learning Environments

    ERIC Educational Resources Information Center

    Ewais, Ahmed; De Troyer, Olga

    2014-01-01

    The use of 3D and Virtual Reality is gaining interest in the context of academic discussions on E-learning technologies. However, the use of 3D for learning environments also has drawbacks. One way to overcome these drawbacks is by having an adaptive learning environment, i.e., an environment that dynamically adapts to the learner and the…

  10. Distributed Drug Discovery, Part 2: Global Rehearsal of Alkylating Agents for the Synthesis of Resin-Bound Unnatural Amino Acids and Virtual D3 Catalog Construction

    PubMed Central

    2008-01-01

    Distributed Drug Discovery (D3) proposes solving large drug discovery problems by breaking them into smaller units for processing at multiple sites. A key component of the synthetic and computational stages of D3 is the global rehearsal of prospective reagents and their subsequent use in the creation of virtual catalogs of molecules accessible by simple, inexpensive combinatorial chemistry. The first section of this article documents the feasibility of the synthetic component of Distributed Drug Discovery. Twenty-four alkylating agents were rehearsed in the United States, Poland, Russia, and Spain, for their utility in the synthesis of resin-bound unnatural amino acids 1, key intermediates in many combinatorial chemistry procedures. This global reagent rehearsal, coupled to virtual library generation, increases the likelihood that any member of that virtual library can be made. It facilitates the realistic integration of worldwide virtual D3 catalog computational analysis with synthesis. The second part of this article describes the creation of the first virtual D3 catalog. It reports the enumeration of 24 416 acylated unnatural amino acids 5, assembled from lists of either rehearsed or well-precedented alkylating and acylating reagents, and describes how the resulting catalog can be freely accessed, searched, and downloaded by the scientific community. PMID:19105725
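    At its core, the virtual-catalog step described above is a combinatorial enumeration over lists of rehearsed reagents. The sketch below illustrates the idea with hypothetical reagent names; the product naming scheme is illustrative only and is not the paper's encoding.

```python
from itertools import product

def enumerate_virtual_catalog(alkylating, acylating):
    """Cross every rehearsed alkylating agent with every acylating
    reagent to enumerate the acylated unnatural amino acids reachable
    by the two-step sequence (product naming is illustrative only)."""
    return [f"{acyl}-AA({alk})" for alk, acyl in product(alkylating, acylating)]

# Toy reagent lists standing in for the rehearsed sets in the paper.
alkylating = ["benzyl bromide", "allyl bromide"]
acylating = ["acetyl chloride", "benzoyl chloride", "Fmoc-Cl"]
catalog = enumerate_virtual_catalog(alkylating, acylating)
print(len(catalog))  # 2 alkylators x 3 acylators -> 6 virtual products
```

    The real catalog grows multiplicatively with each reagent list, which is why global rehearsal of each individual reagent pays off across thousands of virtual products.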

  11. Advances in edge-diffraction modeling for virtual-acoustic simulations

    NASA Astrophysics Data System (ADS)

    Calamia, Paul Thomas

    In recent years there has been growing interest in modeling sound propagation in complex, three-dimensional (3D) virtual environments. With diverse applications for the military, the gaming industry, psychoacoustics researchers, architectural acousticians, and others, advances in computing power and 3D audio-rendering techniques have driven research and development aimed at closing the gap between the auralization and visualization of virtual spaces. To this end, this thesis focuses on improving the physical and perceptual realism of sound-field simulations in virtual environments through advances in edge-diffraction modeling. To model sound propagation in virtual environments, acoustical simulation tools commonly rely on geometrical-acoustics (GA) techniques that assume asymptotically high frequencies, large flat surfaces, and infinitely thin ray-like propagation paths. Such techniques can be augmented with diffraction modeling to compensate for the effect of surface size on the strength and directivity of a reflection, to allow for propagation around obstacles and into shadow zones, and to maintain soundfield continuity across reflection and shadow boundaries. Using a time-domain, line-integral formulation of the Biot-Tolstoy-Medwin (BTM) diffraction expression, this thesis explores various aspects of diffraction calculations for virtual-acoustic simulations. Specifically, we first analyze the periodic singularity of the BTM integrand and describe the relationship between the singularities and higher-order reflections within wedges with open angle less than 180°. Coupled with analytical approximations for the BTM expression, this analysis allows for accurate numerical computations and a continuous sound field in the vicinity of an arbitrary wedge geometry insonified by a point source. Second, we describe an edge-subdivision strategy that allows for fast diffraction calculations with low error relative to a numerically more accurate solution. 
Third, to address the considerable increase in propagation paths due to diffraction, we describe a simple procedure for identifying and culling insignificant diffraction components during a virtual-acoustic simulation. Finally, we present a novel method to find GA components using diffraction parameters that ensures continuity at reflection and shadow boundaries.
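    The edge-subdivision and culling ideas can be sketched numerically. This is an illustrative stand-in only: the 1/(r_s·r_r) weight below is a placeholder amplitude, not the actual BTM integrand, and the function and parameter names are hypothetical.

```python
import math

def edge_diffraction_sum(src, rcv, edge_start, edge_end, n_seg=64, cull_db=-60.0):
    """Subdivide an edge into segments, weight each midpoint by
    1/(r_s * r_r) as a stand-in amplitude, and cull segments whose
    contribution falls below a relative threshold (in dB)."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def norm(v): return math.sqrt(sum(c * c for c in v))
    contribs = []
    for i in range(n_seg):
        t = (i + 0.5) / n_seg  # midpoint of segment i
        p = tuple(s + t * (e - s) for s, e in zip(edge_start, edge_end))
        amp = 1.0 / (norm(sub(p, src)) * norm(sub(p, rcv)))
        contribs.append(amp)
    floor = max(contribs) * 10 ** (cull_db / 20.0)
    kept = [a for a in contribs if a >= floor]
    return sum(kept), len(kept)

total, kept = edge_diffraction_sum((0, -1, 0), (0, 1, 0), (-1, 0, 0), (1, 0, 0))
print(total, kept)
```

    Finer subdivision trades accuracy for cost, and a tighter culling threshold discards more of the weak diffraction components, mirroring the trade-offs discussed in the thesis.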

  12. SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, M; Tobias, R; Pankuch, M

    Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated-GRID-radiotherapy (SFGRT) in the Eclipse treatment-planning-system (TPS). Methods: Patient treatment-plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment-planning and dose-calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment-planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to the measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence to the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create the virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OF’s based on the virtual GRID model compare well to the measured OF’s for SFGRT clinical use.
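    The "follow the beam divergence" construction can be illustrated with a toy point-source projection: hole centers defined at the isocenter plane are magnified when transferred to a deeper plane. The geometry numbers and function names below are hypothetical, not taken from the Eclipse workflow.

```python
def project_hole_centers(centers_iso, sad=100.0, depth=10.0):
    """Scale GRID hole positions from the isocenter plane to a plane
    `depth` cm beyond isocenter, following divergence from a point
    source at source-axis distance `sad` (illustrative values, cm)."""
    scale = (sad + depth) / sad
    return [(x * scale, y * scale) for x, y in centers_iso]

holes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(project_hole_centers(holes))  # off-axis holes magnified by 1.1
```

    Stacking such projected apertures over the CT depth range yields a divergent virtual block, which is the shape the dose calculation then sees.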

  13. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014952 (28 Jan. 2010) --- NASA astronauts Michael Good (seated) and Garrett Reisman, both STS-132 mission specialists, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.

  14. 3D virtual character reconstruction from projections: a NURBS-based approach

    NASA Astrophysics Data System (ADS)

    Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.

    2004-05-01

    This work has been carried out within the framework of TOON, an industrial project supported by the French government. TOON aims at developing tools to automate traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology addresses the issues of 2D/3D reconstruction from a limited number of drawn projections, and of 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be handled efficiently. Note that user interaction is enabled exclusively in 2D, through a multiview constraint-specification method. This is fully consistent with the traditional practice of cartoon creators and avoids the use of 3D modeling software packages, which are generally complex to manipulate.

  15. The use of virtual reality to reimagine two-dimensional representations of three-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Fath, Elaine

    2015-03-01

    A familiar realm in the world of two-dimensional art is the craft of taking a flat canvas and creating, through color, size, and perspective, the illusion of a three-dimensional space. Using well-explored tricks of logic and sight, impossible landscapes such as those by surrealists de Chirico or Salvador Dalí seem to be windows into new and incredible spaces which appear to be simultaneously feasible and utterly nonsensical. As real-time 3D imaging becomes increasingly prevalent as an artistic medium, this process takes on an additional layer of depth: no longer is two-dimensional space restricted to strategies of light, color, line and geometry to create the impression of a three-dimensional space. A digital interactive environment is a space laid out in three dimensions, allowing the user to explore impossible environments in a way that feels very real. In this project, surrealist two-dimensional art was researched and reimagined: what would stepping into a de Chirico or a Magritte look and feel like, if the depth and distance created by light and geometry were not simply single-perspective illusions, but fully formed and explorable spaces? 3D environment-building software is allowing us to step into these impossible spaces in ways that 2D representations leave us yearning for. This art project explores what we gain--and what gets left behind--when these impossible spaces become doors, rather than windows. Using sketching, Maya 3D rendering software, and the Unity Engine, surrealist art was reimagined as a fully navigable real-time digital environment. The surrealist movement and its key artists were researched for their use of color, geometry, texture, and space and how these elements contributed to their work as a whole, which often conveys feelings of unexpectedness or uneasiness. The end goal was to preserve these feelings while allowing the viewer to actively engage with the space.

  16. Virtual reality and the unfolding of higher dimensions

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2006-02-01

    As virtual/augmented reality evolves, the need for spaces that are responsive to structures independent of three-dimensional spatial constraints becomes apparent. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projections affect 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, which are determined by the orientation and displacement of the user in relation to a grid made of tesseracts (the four-dimensional analogues of cubes). The interface accepts input from three-dimensional and four-dimensional transformations and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, while the higher-dimensional grid reflects his/her position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information to complex grids in virtual reality is valuable for engineers, artists, and users in general, because navigation can be internalized like a dance pattern, further engaging us to maneuver space in order to know and experience it.
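    A minimal sketch of the kind of four-dimensional transformation and projection such an interface accepts: a rotation in a single 4D plane followed by a perspective projection into 3D. The function names and viewer distance are illustrative assumptions, not the application's actual code.

```python
import math

def rotate4(p, angle, i=0, j=3):
    """Rotate a 4D point in the (i, j) coordinate plane (default: x-w)."""
    c, s = math.cos(angle), math.sin(angle)
    q = list(p)
    q[i], q[j] = c * p[i] - s * p[j], s * p[i] + c * p[j]
    return tuple(q)

def project_to_3d(p, viewer_w=3.0):
    """Perspective-project a 4D point onto w = 0 from a viewer at w = viewer_w."""
    f = viewer_w / (viewer_w - p[3])
    return (p[0] * f, p[1] * f, p[2] * f)

# The 16 vertices of a tesseract, rotated in the x-w plane then projected.
verts = [(x, y, z, w) for x in (-1, 1) for y in (-1, 1)
         for z in (-1, 1) for w in (-1, 1)]
shadow = [project_to_3d(rotate4(v, math.pi / 6)) for v in verts]
print(len(shadow))  # 16 projected vertices
```

    As the rotation angle is driven by user motion, the projected vertices slide continuously, which is exactly the correlation between input and transformation the user learns to internalize.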

  17. Three-dimensional compound comparison methods and their application in drug discovery.

    PubMed

    Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke

    2015-07-16

    Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.
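    As a point of reference for the 1D/2D/3D classification above, a 2D LBVS baseline can be sketched as fingerprint comparison with the Tanimoto coefficient. The toy set-based fingerprints and compound names are hypothetical; real 2D methods use hashed substructure fingerprints, and 3D methods add conformer handling on top.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two bit-set fingerprints."""
    a, b = set(fp_a), set(fp_b)
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def rank_library(query_fp, library):
    """Rank library compounds by similarity to a known active (2D LBVS)."""
    return sorted(library, key=lambda item: tanimoto(query_fp, item[1]), reverse=True)

query = {1, 4, 7, 9}                       # fingerprint of the known active
lib = [("cpd_A", {1, 4, 7}),
       ("cpd_B", {2, 3}),
       ("cpd_C", {1, 4, 7, 9, 12})]
print([name for name, _ in rank_library(query, lib)])  # ['cpd_C', 'cpd_A', 'cpd_B']
```

    The limitation the review highlights is visible even here: a topologically dissimilar active (low Tanimoto score) would rank poorly, which is why shape- and conformation-aware 3D methods can retrieve actives that 2D ranking misses.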

  18. Visual stimulus presentation using fiber optics in the MRI scanner.

    PubMed

    Huang, Ruey-Song; Sereno, Martin I

    2008-03-30

    Imaging the neural basis of visuomotor actions using fMRI is a topic of increasing interest in the field of cognitive neuroscience. One challenge is to present realistic three-dimensional (3-D) stimuli in the subject's peripersonal space inside the MRI scanner. The stimulus generating apparatus must be compatible with strong magnetic fields and must not interfere with image acquisition. Virtual 3-D stimuli can be generated with a stereo image pair projected onto screens or via binocular goggles. Here, we describe designs and implementations for automatically presenting physical 3-D stimuli (point-light targets) in peripersonal and near-face space using fiber optics in the MRI scanner. The feasibility of fiber-optic based displays was demonstrated in two experiments. The first presented a point-light array along a slanted surface near the body, and the second presented multiple point-light targets around the face. Stimuli were presented using phase-encoded paradigms in both experiments. The results suggest that fiber-optic based displays can be a complementary approach for visual stimulus presentation in the MRI scanner.
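    The phase-encoded paradigm mentioned above is typically analyzed by reading off the phase of the response at the stimulation frequency. A minimal sketch with a synthetic voxel time series follows; the run length, cycle count, and lag are illustrative values, not the study's parameters.

```python
import cmath, math

def phase_at_frequency(ts, cycles):
    """Phase (radians) of the DFT component at `cycles` per run,
    the quantity that is mapped in phase-encoded designs."""
    n = len(ts)
    comp = sum(x * cmath.exp(-2j * math.pi * cycles * k / n)
               for k, x in enumerate(ts))
    return cmath.phase(comp)

# Synthetic voxel time series: 8 stimulus cycles with a known phase lag.
n, cycles, lag = 128, 8, 0.9
ts = [math.cos(2 * math.pi * cycles * k / n - lag) for k in range(n)]
print(round(phase_at_frequency(ts, cycles), 3))  # -> -0.9
```

    Because each point-light position is presented at a fixed point in the stimulus cycle, the recovered phase tags which position drove a given voxel.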

  19. Immersive Interaction, Manipulation and Analysis of Large 3D Datasets for Planetary and Earth Sciences

    NASA Astrophysics Data System (ADS)

    Pariser, O.; Calef, F.; Manning, E. M.; Ardulov, V.

    2017-12-01

    We will present an implementation and a study of several use cases of Virtual Reality (VR) for immersive display, interaction, and analysis of large, complex 3D datasets. These datasets have been acquired by instruments across several Earth, planetary, and solar space robotics missions. First, we describe the architecture of the common application framework developed to ingest data, interface with VR display devices, and program input controllers in various computing environments. Tethered and portable VR technologies are contrasted and the advantages of each highlighted. We then present experimental immersive-analytics visual constructs that enable augmenting 3D datasets with 2D ones, such as images and statistical or abstract data. We conclude with a comparative analysis against traditional visualization applications and share the feedback provided by our users: scientists and engineers.

  20. Discovery of new GSK-3β inhibitors through structure-based virtual screening.

    PubMed

    Dou, Xiaodong; Jiang, Lan; Wang, Yanxing; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren

    2018-01-15

    Glycogen synthase kinase-3β (GSK-3β) is an attractive therapeutic target for human diseases such as diabetes, cancer, neurodegenerative diseases, and inflammation. Structure-based virtual screening was therefore performed to identify novel scaffolds of GSK-3β inhibitors, and we observed that the conserved water molecules of GSK-3β were suitable for inclusion in the virtual screening. Fourteen hits were found, among which D1 (IC50 of 0.71 μM) was identified. Furthermore, the neuroprotective activity of D1-D3 was validated at the cellular level. 2D similarity searches were used to find derivatives of the most inhibitory compounds, and an enriched structure-activity relationship suggested that these skeletons are worthy of study as potent GSK-3β inhibitors. Copyright © 2017. Published by Elsevier Ltd.

  1. Innovative virtual reality measurements for embryonic growth and development.

    PubMed

    Verwoerd-Dikkeboom, C M; Koning, A H J; Hop, W C; van der Spek, P J; Exalto, N; Steegers, E A P

    2010-06-01

    Innovative imaging techniques, using up-to-date ultrasonic equipment, necessitate specific biometry. The aim of our study was to test the possibility of detailed human embryonic biometry using a virtual reality (VR) technique. In a longitudinal study, three-dimensional (3D) measurements were performed from 6 to 14 weeks gestational age in 32 pregnancies (n = 16 spontaneous conception, n = 16 IVF/ICSI). A total of 125 3D volumes were analysed in the I-Space VR system, which allows binocular depth perception, providing a realistic 3D illusion. Crown-rump length (CRL), biparietal diameter (BPD), occipito-frontal diameter (OFD), head circumference (HC) and abdominal circumference (AC) were measured as well as arm length, shoulder width, elbow width, hip width and knee width. CRL, BPD, OFD and HC could be measured in more than 96% of patients, and AC in 78%. Shoulder width, elbow width, hip width and knee width could be measured in more than 95% of cases, and arm length in 82% of cases. Growth curves were constructed for all variables. Ear and foot measurements were only possible beyond 9 weeks gestation. This study provides a detailed, longitudinal description of normal human embryonic growth, facilitated by a VR system. Growth curves were created for embryonic biometry of the CRL, BPD, HC and AC early in pregnancy and also of several 'new' biometric measurements. Applying virtual embryoscopy will enable us to diagnose growth and/or developmental delay earlier and more accurately. This is especially important for pregnancies at risk of severe complications, such as recurrent late miscarriage and early growth restriction.
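    Constructing a growth curve from such repeated biometry amounts to regressing each measurement on gestational age. A minimal ordinary-least-squares sketch follows; the CRL values are hypothetical illustrations, not the study's data, and real growth curves typically use nonlinear or percentile models.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept, a minimal stand-in
    for the regression used to build biometry growth curves."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical crown-rump length (mm) at gestational ages (weeks).
ga = [6, 8, 10, 12, 14]
crl = [4, 16, 32, 55, 80]
slope, intercept = linear_fit(ga, crl)
print(round(slope, 2))  # growth rate, mm per week
```

    Fitting one such curve per variable (CRL, BPD, HC, AC, and the "new" measurements) gives the reference against which growth or developmental delay can be detected.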

  2. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L. (Principal Investigator)

    1995-01-01

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources; the results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe-tube technique with HRTFs measured with the closed-canal insert microphones of the Crystal River Engineering Snapshot system.
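    Producing a virtual auditory image from measured HRTFs reduces, per ear, to filtering the source signal with the corresponding head-related impulse response (HRIR). A toy sketch with hypothetical 3-tap HRIRs follows; real HRIRs are hundreds of taps long and selected per source direction.

```python
def convolve(signal, ir):
    """Direct-form FIR convolution, a stand-in for filtering a source
    with a measured head-related impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    """Produce a left/right pair by filtering a mono source with the
    HRIRs measured for the target direction."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

left, right = binaural_render([1.0, 0.0, 0.5],
                              [0.9, 0.2, 0.0],   # hypothetical left-ear HRIR
                              [0.4, 0.1, 0.05])  # hypothetical right-ear HRIR
print(left[0], right[0])  # 0.9 0.4
```

    The interaural level and time differences baked into the two HRIRs are what place the image at the intended position, which is why measurement errors (e.g., open-canal vs. closed-canal probes) directly degrade display fidelity.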

  3. CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.

    PubMed

    Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj

    2018-01-01

    Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.
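    The work-space conflicts such a 4D view should make apparent can be modeled as pairs of scheduled tasks whose time intervals and 3D work volumes both overlap. A minimal sketch with hypothetical task data follows; CasCADe's own conflict analysis is richer than this axis-aligned check.

```python
def boxes_overlap(a, b):
    """Axis-aligned 3D boxes given as ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    return all(a[0][k] < b[1][k] and b[0][k] < a[1][k] for k in range(3))

def workspace_conflicts(tasks):
    """Flag task pairs whose time intervals and work volumes both
    overlap, i.e. spatio-temporal simultaneity."""
    hits = []
    for i in range(len(tasks)):
        for j in range(i + 1, len(tasks)):
            (n1, t1, b1), (n2, t2, b2) = tasks[i], tasks[j]
            if t1[0] < t2[1] and t2[0] < t1[1] and boxes_overlap(b1, b2):
                hits.append((n1, n2))
    return hits

tasks = [  # hypothetical (name, (start, end), work volume) triples
    ("scaffold", (0, 5), ((0, 0, 0), (2, 2, 2))),
    ("piping",   (3, 8), ((1, 1, 0), (3, 3, 2))),   # overlaps scaffold in space and time
    ("painting", (9, 12), ((0, 0, 0), (2, 2, 2))),  # same volume, but later: no conflict
]
print(workspace_conflicts(tasks))  # [('scaffold', 'piping')]
```

    Rendering each task's volume only during its scheduled interval is what lets a 4D visualization surface these conflicts at a glance rather than by pairwise inspection.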

  4. Addendum to: Modelling duality between bound and resonant meson spectra by means of free quantum motions on the de Sitter space-time dS4

    NASA Astrophysics Data System (ADS)

    Kirchbach, M.; Compean, C. B.

    2017-04-01

    In the article under discussion, the analysis of the spectra of the unflavored mesons led us to some intriguing insights into the possible geometry of space-time outside the causal Minkowski light cone and into the nature of the strong interaction. Applying the potential-theory concept of geometrization of interactions, we showed that the meson masses are best described by a confining potential composed of the centrifugal barrier on the three-dimensional spherical space, S3, and a charge-dipole potential constructed from the Green function of the S3 Laplacian. The dipole potential emerged because S3 does not support single charges without violation of the Gauss theorem and the superposition principle, thus providing a natural stage for the description of the general phenomenon of confined charge-neutral systems. However, in the original article we did not relate the charge dipoles on S3 to the color-neutral mesons, and did not express the magnitude of the confining dipole potential in terms of the strong coupling αS and the number of colors, Nc; this is the subject of the addendum. To the extent that S3 can be thought of as the unique closed space-like geodesic of a four-dimensional de Sitter space-time, dS4, we hypothesized the space-like region outside the causal Einsteinian light cone (it describes virtual processes, among them interactions) to be the (1+4)-dimensional subspace of the conformal (2+4) space-time, foliated with dS4 hyperboloids, and in this way assumed the relevance of dS4 special relativity for strong-interaction processes. The potential designed in this way predicted meson spectra with conformal degeneracy patterns, in accord with the experimental observations. We now extract the αs values in the infrared from data on meson masses. The results obtained are compatible with the αs estimates provided by other approaches.
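    For orientation, the confining potential described above (centrifugal barrier on S3 plus a dipole term built from the S3 Green function) is, to our reading of this line of work, of the trigonometric Rosen-Morse form. The following is a hedged reconstruction, not a formula quoted from the addendum; here χ is the second polar angle on S3, R the hyper-radius, μ the reduced mass, ℓ the angular momentum, and 2B the dipole magnitude:

```latex
V(\chi) \;=\; \frac{\hbar^{2}\,\ell(\ell+1)}{2\mu R^{2}\sin^{2}\chi} \;-\; 2B\cot\chi,
\qquad \chi \in (0,\pi).
```

    The cot χ term is the S3 analogue of the flat-space 1/r Coulomb potential and is the piece whose magnitude the addendum re-expresses through αS and Nc.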

  5. Valorisation of Cultural Heritage Through Virtual Visit and Augmented Reality: the Case of the Abbey of Epau (france)

    NASA Astrophysics Data System (ADS)

    Simonetto, E.; Froment, C.; Labergerie, E.; Ferré, G.; Séchet, B.; Chédorge, H.; Cali, J.; Polidori, L.

    2013-07-01

    Terrestrial Laser Scanning (TLS), 3-D modeling, and Web visualization are the three key steps needed to store cultural heritage and grant free, wide access to it, as highlighted by many recent examples. The goal of this study is to set up 3-D Web resources for "virtually" visiting the exterior of the Abbaye de l'Epau, an old French abbey with both a rich history and delicate architecture. The virtuality is considered in two ways: free navigation in a virtual-reality environment around the abbey, and a game activity using augmented reality. First of all, the data acquisition consists of a GPS and tacheometry survey, terrestrial laser scanning, and photography. After data pre-processing, the meshed and textured 3-D model is generated using the 3DReshaper commercial software. The virtual-reality visit and the augmented-reality animation are then created using the Unity software. This work shows the value of such tools in bringing out regional cultural heritage and making it attractive to the public.

  6. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be produced. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, to operate the computer through the connected instrumentarium, and to provide sophisticated multimedia interfaces.

  7. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact, and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  8. Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method

    ERIC Educational Resources Information Center

    Dan, A.; Reiner, M.

    2018-01-01

    Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…

  9. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  10. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    PubMed Central

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356

  11. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    PubMed

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  12. Four Bed Molecular Sieve - Exploration (4BMS-X) Virtual Heater Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schunk, R. Gregory; Peters, Warren T.; Thomas, John T., Jr.

    2017-01-01

    A 4BMS-X (Four Bed Molecular Sieve - Exploration) design and heater optimization study for CO2 sorbent beds in proposed exploration system architectures is presented. The primary objectives of the study are to reduce heater power and thermal gradients within the CO2 sorbent beds while minimizing channeling effects. Some of the notable changes from the ISS (International Space Station) CDRA (Carbon Dioxide Removal Assembly) to the proposed exploration system architecture include cylindrical beds, alternate sorbents and an improved heater core. Results from both 2D and 3D sorbent bed thermal models with integrated heaters are presented. The 2D sorbent bed models are used to optimize heater power and fin geometry while the 3D models address end effects in the beds for more realistic thermal gradient and heater power predictions.

  13. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014953 (28 Jan. 2010) --- NASA astronauts Piers Sellers, STS-132 mission specialist; and Tracy Caldwell Dyson, Expedition 23/24 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.

  14. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014949 (28 Jan. 2010) --- NASA astronauts Piers Sellers, STS-132 mission specialist; and Tracy Caldwell Dyson, Expedition 23/24 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.

  15. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014956 (28 Jan. 2010) --- NASA astronauts Ken Ham (left foreground), STS-132 commander; Michael Good, mission specialist; and Tony Antonelli (right), pilot, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.

  16. STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training

    NASA Image and Video Library

    2009-09-25

    JSC2009-E-214346 (25 Sept. 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Naoko Yamazaki, STS-131 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  17. STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training

    NASA Image and Video Library

    2009-09-25

    JSC2009-E-214328 (25 Sept. 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Naoko Yamazaki, STS-131 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  18. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014951 (28 Jan. 2010) --- NASA astronauts Michael Good (seated), Garrett Reisman (right foreground), both STS-132 mission specialists; and Tony Antonelli, pilot, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.

  19. STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training

    NASA Image and Video Library

    2009-09-25

    JSC2009-E-214321 (25 Sept. 2009) --- NASA astronauts James P. Dutton Jr., STS-131 pilot; and Stephanie Wilson, mission specialist, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  20. Research on virtual Guzheng based on Kinect

    NASA Astrophysics Data System (ADS)

    Li, Shuyao; Xu, Kuangyi; Zhang, Heng

    2018-05-01

    There is a large body of research on virtual instruments, but little on classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and a gesture recognition method, to design a virtual playing system with a demonstration function for the Guzheng, a traditional Chinese musical instrument. In this paper, the real scene obtained by the Kinect camera is fused with the virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 algorithm are used to recognize the relative position of the user's right hand and the virtual Guzheng, and the hand gesture of the user is recognized by the Kinect.
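
The hand-to-string mapping described above can be sketched in miniature. The snippet below segments a depth frame by a simple near-depth threshold (a toy stand-in for the Suzuki85 contour extraction the paper uses) and maps the hand centroid onto one of the Guzheng's 21 strings; the frame, depth band and mapping are invented for illustration.

```python
# Toy stand-in for the hand/Guzheng position step: segment the nearest
# blob in a Kinect-style depth frame by a depth band (the paper uses
# Suzuki85 contour extraction instead), then map the blob centroid onto
# one of the instrument's 21 strings. Frame and thresholds are invented.

def hand_centroid(depth, near_mm=400, far_mm=700):
    """Centroid (row, col) of pixels inside the near-depth band, or None."""
    pts = [(r, c)
           for r, row in enumerate(depth)
           for c, d in enumerate(row)
           if near_mm <= d <= far_mm]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def string_under(col, frame_width, n_strings=21):
    """Map a horizontal pixel position to a string index (0..20)."""
    return min(int(col / frame_width * n_strings), n_strings - 1)

# 4x8 synthetic depth frame (mm); the 500 mm pixels form the "hand".
frame = [[900] * 8 for _ in range(4)]
frame[1][5] = frame[2][5] = frame[3][5] = 500
row, col = hand_centroid(frame)
print(string_under(col, 8))  # → 13
```

A real pipeline would replace the threshold with contour extraction on the depth image and track the centroid over time to detect plucking gestures.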

  1. Facilitating 3D Virtual World Learning Environments Creation by Non-Technical End Users through Template-Based Virtual World Instantiation

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing

    2013-01-01

    This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…

  2. From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy

    NASA Astrophysics Data System (ADS)

    Jang, Susan

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked-pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. 
The manipulation group outperformed the visual group, suggesting that the interactivity that took place among the manipulation group promoted visual and motoric embodiment, which in turn enhanced learning. Moreover, when accounting for spatial ability, it was found that manipulation benefits students with low spatial ability more than students with high spatial ability.

  3. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
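
The key step, translating the haptic device position into the space of the reference CT image via a time-variant displacement field, can be sketched as follows. The sinusoidal breathing phase and per-axis amplitudes are invented assumptions; the paper derives its displacements from patient-specific 4D CT motion models.

```python
import math

# Toy sketch of the warping step: a haptic probe position in the animated
# (breathing) patient is mapped into the static reference-CT frame by
# subtracting a time-variant displacement u(t). The sinusoidal phase and
# per-axis amplitudes are illustrative assumptions, not the paper's
# patient-specific 4D CT motion model.

BREATH_PERIOD_S = 4.0            # assumed respiratory cycle length
MAX_SHIFT_MM = (0.0, 2.0, 8.0)   # assumed peak displacement per axis

def displacement(t):
    """Spatially uniform displacement u(t) in mm at time t (s)."""
    phase = 0.5 * (1 - math.cos(2 * math.pi * t / BREATH_PERIOD_S))
    return tuple(a * phase for a in MAX_SHIFT_MM)

def to_reference_frame(probe_pos, t):
    """Translate a haptic device position into reference-CT coordinates."""
    return tuple(x - u for x, u in zip(probe_pos, displacement(t)))

print(to_reference_frame((10.0, 20.0, 30.0), 0.0))  # end-exhale: unchanged
print(to_reference_frame((10.0, 20.0, 30.0), 2.0))  # mid-cycle: shifted
```

In the actual system the displacement also depends on position, and this mapping runs at haptic rates (around 2,000 Hz) so the device always probes consistent reference-image data.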

  4. [Fabrication and accuracy research on 3D printing dental model based on cone beam computed tomography digital modeling].

    PubMed

    Zhang, Hui-Rong; Yin, Le-Feng; Liu, Yan-Li; Yan, Li-Yi; Wang, Ning; Liu, Gang; An, Xiao-Li; Liu, Bin

    2018-04-01

    The aim of this study is to build a digital dental model with cone beam computed tomography (CBCT), to fabricate a virtual model via 3D printing, and to determine the accuracy of the 3D printing dental model by comparing the result with a traditional dental cast. CBCT of orthodontic patients was obtained to build a digital dental model by using Mimics 10.01 and Geomagic studio software. The 3D virtual models were fabricated via the fused deposition modeling (FDM) technique. The 3D virtual models were compared with the traditional cast models by using a Vernier caliper. The measurements used for comparison included the width of each tooth, the length and width of the maxillary and mandibular arches, and the length of the posterior dental crest. The 3D-printed models showed high accuracy compared with the traditional cast models. The results of the paired t-test of all data showed that no statistically significant difference was observed between the two groups (P>0.05). Dental digital models built with CBCT realize the digital storage of patients' dental condition. The virtual dental model fabricated via 3D printing avoids traditional impression and simplifies the clinical examination process. The 3D printing dental models produced via FDM show a high degree of accuracy. Thus, these models are appropriate for clinical practice.
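
The accuracy comparison above rests on a paired t-test over matched measurements from the two models. A minimal sketch of that statistic, with invented tooth-width values, is:

```python
import math

# Minimal paired t-test of the kind used to compare measurements taken on
# the 3D-printed model against the traditional cast:
#   t = mean(d) / (sd / sqrt(n))
# over the paired differences d, with n - 1 degrees of freedom.
# The tooth-width values (mm) below are invented for illustration.

def paired_t(xs, ys):
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

printed = [8.41, 7.02, 9.15, 6.88, 7.54]  # hypothetical widths (mm)
cast    = [8.38, 7.05, 9.11, 6.90, 7.50]
print(round(paired_t(printed, cast), 3))  # small t: no clear difference
```

A |t| below the critical value for n - 1 degrees of freedom corresponds to the paper's P > 0.05 finding of no significant difference.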

  5. A collaborative virtual reality environment for neurosurgical planning and training.

    PubMed

    Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel

    2007-11-01

    We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.

  6. Permuting input for more effective sampling of 3D conformer space

    NASA Astrophysics Data System (ADS)

    Carta, Giorgio; Onnis, Valeria; Knox, Andrew J. S.; Fayne, Darren; Lloyd, David G.

    2006-03-01

    SMILES strings and other classic 2D structural formats offer a convenient way to represent molecules as a simplistic connection table, with the inherent advantages of ease of handling and storage. In the context of virtual screening, chemical databases to be screened are often initially represented by canonicalised SMILES strings that can be filtered and pre-processed in a number of ways, resulting in molecules that occupy similar regions of chemical space to active compounds of a therapeutic target. A wide variety of software exists to convert molecules into SMILES format, including Mol2smi (Daylight Inc.), MOE (Chemical Computing Group) and Babel (Openeye Scientific Software). Depending on the algorithm employed, the atoms of a SMILES string defining a molecule can be ordered differently; upon conversion to 3D coordinates, these orderings produce ostensibly the same molecule. In this work we show how different permutations of a SMILES string can affect conformer generation, affecting the reliability and repeatability of the results. Furthermore, we propose a novel procedure for the generation of conformers that takes advantage of permutations of the input strings, both SMILES and other 2D formats, leading to more effective sampling of conformational space, and implements fingerprint and principal component analysis steps to post-process and visualise the results.
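
The central observation, that one connection table admits many atom orderings, can be illustrated without a cheminformatics toolkit. In the toy sketch below, depth-first traversals of a small molecular graph (propan-1-ol heavy atoms, an invented example) from different start atoms yield different atom orders, while a degree-sequence check serves as a crude stand-in for verifying that both orderings describe the same molecule.

```python
# Toy illustration of the paper's point: the same molecule (connection
# table) admits many atom orderings, so "the same" SMILES written by
# different tools can list atoms in a different sequence. A DFS from
# different start atoms re-orders a small graph (propan-1-ol, heavy
# atoms only: C0-C1-C2-O3); the degree sequence is a crude invariant
# suggesting both orderings describe one molecule.

bonds = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # adjacency list

def dfs_order(adj, start):
    order, seen, stack = [], set(), [start]
    while stack:
        a = stack.pop()
        if a in seen:
            continue
        seen.add(a)
        order.append(a)
        stack.extend(reversed(adj[a]))
    return order

def degree_sequence(adj, order):
    return sorted(len(adj[a]) for a in order)

o1 = dfs_order(bonds, 0)   # [0, 1, 2, 3]
o2 = dfs_order(bonds, 3)   # [3, 2, 1, 0]
assert o1 != o2
assert degree_sequence(bonds, o1) == degree_sequence(bonds, o2)
print(o1, o2)
```

A conformer generator that seeds its search from the atom order thus explores different regions of conformational space for each permutation, which is the effect the paper exploits.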

  7. A Novel Method of Orbital Floor Reconstruction Using Virtual Planning, 3-Dimensional Printing, and Autologous Bone.

    PubMed

    Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan

    2016-08-01

    Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
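
The step of virtually closing the fracture by spline interpolation can be illustrated in one dimension. The sketch below bridges a gap in a hypothetical profile of orbital-floor heights with a cubic Hermite segment whose endpoint values and slopes come from the intact samples on either side; all heights are invented.

```python
# 1D toy of closing a defect by spline interpolation: heights of the
# orbital floor are sampled along a line, x = 4..6 are missing (the
# fracture), and a cubic Hermite segment bridges the gap using boundary
# heights and one-sided slope estimates. All height values are invented.

def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite interpolation for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

left = [2.0, 2.1, 2.3, 2.4]   # intact heights at x = 0..3 (mm)
right = [2.9, 3.0, 3.1]       # intact heights at x = 7..9 (mm)
span = 7 - 3                  # gap parameterized over x in [3, 7]
m0 = (left[-1] - left[-2]) * span    # slopes scaled to the unit interval
m1 = (right[1] - right[0]) * span
filled = [hermite(left[-1], m0, right[0], m1, (x - 3) / span)
          for x in (4, 5, 6)]
print([round(v, 3) for v in filled])  # smooth, monotone bridge
```

The clinical workflow does the analogous operation on a 3D surface, then prints a mold of the interpolated floor to shape the autologous bone graft.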

  8. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  9. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    DOE PAGES

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine; ...

    2013-01-01

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  10. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  11. High-Energy 3D Calorimeter based on position-sensitive virtual Frisch-grid CdZnTe detectors for use in Gamma-ray Astronomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolotnikov, Alexey; De Geronimo, GianLuigi; Vernon, Emerson

    We present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CZT detectors. This calorimeter aims to measure photons with energies from ~100 keV to 10 (goal 50) MeV. The expected energy resolution at 662 keV is ~1% FWHM, and the photon interaction position-measurement accuracy is ~1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section of 6x6 mm² and length of 2-4 cm. The bars are arranged in modules of 4 x 4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., > 1 cm). Also, it allows us to relax the requirements on the quality of the crystals, maintaining good energy resolution and significantly reducing the instrument cost. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of γ rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons. Also, it could provide suitable energy resolution to allow for spectroscopic measurements of γ-ray lines from nuclear decays. Another viable option is to use this calorimeter as a focal plane to conduct spectroscopic measurements of cosmic γ-ray events. In combination with a coded-aperture mask, it potentially could provide mapping of the 511-keV radiation from the Galactic Center region.

  12. Development of a Virtual Museum Including a 4d Presentation of Building History in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Tschirschwitz, F.; Deggim, S.

    2017-02-01

    In the last two decades the definition of the term "virtual museum" has changed due to rapid technological developments. Using today's available 3D technologies, a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On the one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and knowledge deepening either before or after the real visit. On the other hand, a virtual museum should also be used as teaching material in the context of museum education. The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum "Alt-Segeberger Bürgerhaus", a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany: (a) an interactive computer-based tour for visitors to explore the exhibition and collect information of interest, or (b) immersion in 3D virtual reality with the HTC Vive Virtual Reality System.

  13. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions with regard to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because content can be modified dynamically and multiple users can cooperate on tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. This study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.

  14. The Infrastructure of an Integrated Virtual Reality Environment for International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    This study is a continuation of summer research conducted under the 1995 NASA/ASEE Summer Faculty Fellowship Program. The effort provides the infrastructure of an integrated Virtual Reality (VR) environment for the International Space Welding Experiment (ISWE) Analytical Tool and Trainer and the Microgravity Science Glovebox (MSG) Analytical Tool study. Because the MSG CAD files and the 3D-CAD converter were unavailable, little was done on the MSG study; however, the infrastructure of the integrated VR environment for ISWE is capable of supporting the MSG study when the CAD files become available. Two primary goals were established for this research. First, the essential peripheral devices for an integrated VR environment were studied and developed for the ISWE and MSG studies. Second, training of the flight crew (astronauts) in general orientation and procedures, and in the location, orientation and sequencing of the welding samples and tools, was built into the VR system for studying the welding process and training the astronauts.

  15. A virtual reality oriented clinical experiment on post-stroke rehabilitation: performance and preference comparison among different stereoscopic displays­

    NASA Astrophysics Data System (ADS)

    Yeh, Shih-Ching; Rizzo, Albert; Sawchuk, Alexander A.

    2007-02-01

    We have developed a novel VR task, the Dynamic Reaching Test, which measures human forearm movement in 3D space. In this task, three different stereoscopic displays, autostereoscopic (AS), shutter glasses (SG) and head-mounted display (HMD), are used in tests in which subjects must catch a virtual ball thrown at them. Parameters such as the percentage of successful catches, movement efficiency (subject path length compared to minimal path length) and reaction time are measured to evaluate differences in 3D perception among the three stereoscopic displays. The SG produces the highest percentage of successful catches, though the difference between the three displays is small, implying that users can perform the VR task with any of the displays. The SG and HMD produced the best movement efficiency, while the AS was slightly less efficient. Finally, the AS and HMD produced similar reaction times that were slightly higher (by 0.1 s) than the SG. We conclude that the SG and HMD displays were the most effective, but only slightly better than the AS display.
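
One plausible formulation of the movement-efficiency parameter above (the abstract defines it as subject path length compared to minimal path length) is the ratio of the straight-line start-to-catch distance to the length of the path actually travelled, so 1.0 is a perfectly straight reach. The 3D sample points below are invented.

```python
import math

# One plausible formulation of the movement-efficiency metric from the
# Dynamic Reaching Test: the straight-line (minimal) start-to-catch
# distance divided by the length of the path the forearm actually
# travelled; 1.0 means a perfectly straight reach. The sample points (m)
# are invented for illustration.

def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def movement_efficiency(points):
    return math.dist(points[0], points[-1]) / path_length(points)

reach = [(0, 0, 0), (0.1, 0.2, 0.1), (0.2, 0.35, 0.15), (0.3, 0.4, 0.2)]
print(round(movement_efficiency(reach), 3))  # close to 1: a direct reach
```

In practice the forearm trajectory would be sampled from the motion tracker at a fixed rate, and the efficiency computed per catch attempt.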

  16. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  17. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R]

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markidis, S.; Rizwan, U.

    The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators can experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures and are not fully interactive, and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture with the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded by a laptop or tablet PC added to the virtual environment. (authors)

  19. A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.

    PubMed

    Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-08-05

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.

  20. Utilising a Collaborative Macro-Script to Enhance Student Engagement: A Mixed Method Study in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Retalis, Symeon; Paraskeva, Fotini

    2012-01-01

    This study examines the effect of using an online 3D virtual environment in teaching Mathematics in Primary Education. In particular, it explores the extent to which student engagement--behavioral, affective and cognitive--is fostered by such tools in order to enhance collaborative learning. For the study we used a purpose-created 3D virtual…

  1. Matching tire tracks on the head using forensic photogrammetry.

    PubMed

    Thali, M J; Braun, M; Brüschweiler, W; Dirnhofer, R

    2000-09-11

    Forensic, CAD-supported photogrammetry plays an important role in documenting forensically relevant injuries from a reconstructive point of view, particularly when a detailed 3D reconstruction is vital. This is demonstrated with a soft-tissue injury to the face caused by a car tire running over the victim. Since the objects to be investigated (the injury and the surface of the tire) are evaluated in virtual space, they must first be photographed in series. These photo sequences are then evaluated with the RolleiMetric multi-image evaluation system, which measures and calculates the spatial location of points shown in the photo sequences and creates 3D data models of the objects. In a 3D CAD program, the model of the injury is then compared against the model of the suspected injury-causing instrument. The validation of forensic, CAD-supported photogrammetry, as shown by the perfect 3D match between the tire tread and the facial injury, demonstrates how greatly this 3D method surpasses the classic 2D overlay method (one-to-one photography).

  2. A foundation for savantism? Visuo-spatial synaesthetes present with cognitive benefits.

    PubMed

    Simner, Julia; Mayo, Neil; Spiller, Mary-Jane

    2009-01-01

    Individuals with 'time-space' synaesthesia have conscious awareness of mappings between time and space (e.g., they may see months arranged in an ellipse, or years as columns or spirals). These mappings exist in the 3D space around the body or in a virtual space within the mind's eye. Our study shows that these extraordinary mappings derive from, or give rise to, superior abilities in the two domains linked by this cross-modal phenomenon (i.e., abilities relating to time and visualised space). We tested ten time-space synaesthetes with a battery of temporal and visual/spatial tests. Our temporal battery (the Edinburgh [Public and Autobiographical] Events Battery; EEB) assessed both autobiographical and non-autobiographical memory for events. Our visual/spatial tests assessed the ability to manipulate real or imagined objects in 3D space (the Three Dimensional Constructional Praxis test; the Visual Object and Space Perception Battery; the University of Southern California Mental Rotation Test) as well as visual memory recall (the Visual Patterns Test; VPT). Synaesthetes' performance was superior to that of the control population in every assessment drawing on abilities related to their mental calendars, but not in tasks that do not. Our paper discusses the implications of this temporal-spatial advantage as it relates to normal processing, synaesthetic processing, and the savant-like condition of hyperthymestic syndrome (Parker et al., 2006).

  3. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  4. Glasses-free large size high-resolution three-dimensional display based on the projector array

    NASA Astrophysics Data System (ADS)

    Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong

    2014-11-01

    Natural three-dimensional (3D) display similar to real life normally requires a huge amount of spatial information to increase the number of views and provide smooth motion parallax. However, the minimum 3D information required by the eyes should be used, to reduce the demands on display devices and on processing time. For 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear-projection and front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuser screens larger than 1.8 m × 1.2 m. The displayed clear depth is larger than 1.5 m. The flexibility of digitized recording and reconstruction based on the 3D diffuser screen relieves the limitations of conventional 3D display technologies and can realize fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
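    The viewing-slit constraint implies a simple lower bound on the number of views: the viewing zone must be tiled by slits no wider than the eye's pupil. A back-of-envelope sketch (the pupil diameter and zone width below are illustrative assumptions, not figures from the paper):

    ```python
    import math

    def min_view_count(viewing_zone_mm: float, pupil_mm: float = 4.0) -> int:
        """Minimum number of views (and hence projectors, one per view)
        needed so that each virtual viewing slit is no wider than the
        eye's pupil across the whole viewing zone."""
        return math.ceil(viewing_zone_mm / pupil_mm)

    # Hypothetical figures: a 400 mm wide viewing zone and a 4 mm pupil
    # already call for at least 100 views.
    views = min_view_count(400.0)
    ```

    This rough count suggests why space-multiplexed systems of this kind end up using projector arrays numbering in the hundreds.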

  5. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks and MIDI sliders. These devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3D game character in real time. The prototype experimental setup was successfully applied to a boxing game that requires very fast movement of the human character.

  6. Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.

    PubMed

    Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars

    2017-10-01

    3D reconstructions of motor vehicle collisions are used to identify the causes of these events and potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since it is often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured-light scans of a mirror improve the accuracy of simulating its field of view. We analyzed the performance of virtual mirror surfaces based on structured-light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discuss the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured-light scans of mirror surfaces can be used to simulate virtual mirror surfaces for 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Algorithms for extraction of structural attitudes from 3D outcrop models

    NASA Astrophysics Data System (ADS)

    Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos

    2016-05-01

    The acquisition of geological attitudes on rock cuts using a traditional field compass survey can be a time-consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which photogrammetric 3D digital models have seen increasing use. In this paper we present two algorithms for the extraction of attitudes of geological discontinuities from virtual outcrop models, ply2atti and scanline, implemented in the Python programming language. The ply2atti algorithm allows the virtual sampling of planar discontinuities appearing on the 3D model as individually exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between surveys undertaken with a traditional field compass and virtual sampling on the 3D digital models.
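    The authors' ply2atti and scanline code is not reproduced here, but the core of such virtual sampling is fitting a plane to points picked on a discontinuity surface and converting its normal into a dip direction/dip pair. A minimal sketch under assumed conventions (coordinates ordered east, north, up; this is not the paper's implementation):

    ```python
    import numpy as np

    def plane_attitude(points):
        """Fit a plane to 3D points (east, north, up) and return the
        (dip_direction, dip) attitude in degrees. Sketch of the kind of
        computation ply2atti-style tools perform; not the authors' code."""
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)
        # The plane normal is the right singular vector associated with
        # the smallest singular value of the centered point cloud.
        _, _, vt = np.linalg.svd(centered)
        n = vt[-1]
        if n[2] < 0:            # force the normal to point upward
            n = -n
        dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
        # For an upward normal, its horizontal component points downdip.
        dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
        return dip_dir, dip
    ```

    For example, four points on a surface dropping 1 m for every 1 m travelled east yield a dip of 45° toward azimuth 090°.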

  8. iSocial: delivering the Social Competence Intervention for Adolescents (SCI-A) in a 3D virtual learning environment for youth with high functioning autism.

    PubMed

    Stichter, Janine P; Laffey, James; Galyen, Krista; Herzog, Melissa

    2014-02-01

    One consistent area of need for students with autism spectrum disorders is social competence. However, the increasing need to provide qualified teachers to deliver evidence-based practices in areas like social competence leaves schools, such as those in rural areas, in need of support. Distance education, and in particular 3D virtual learning, holds great promise for supporting schools and youth in gaining social competence through knowledge and social practice in context. iSocial, a distance-education 3D virtual learning environment, implemented the 31-lesson social competence intervention for adolescents across three small cohorts totaling 11 students over a period of 4 months. Results demonstrated that the social competence curriculum was delivered with fidelity in the 3D virtual learning environment. Moreover, learning outcomes suggest that the iSocial approach shows promise for social competence benefits for youth.

  9. Holographic space: presence and absence in time

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Ren; Richardson, Martin

    2017-03-01

    In contemporary art, time-based media generally refers to artworks that have duration as a dimension and unfold to the viewer over time, whether video, slide, film, computer-based technology, or audio. As part of this category, holography pushes this visual-oriented narrative a step further, offering a real 3D image that invites audiences to revisit a scene of the past at the moment of its recording in space and time. Audiences can also experience kinetic holographic aesthetics by constantly moving the viewing point or the illumination source, which creates dynamic visual effects. In other words, when the audience and the hologram remain still, the holographic image can only be perceived statically. This unique form of expression is not created by virtual simulation; the principle of the wavefront reconstruction process sets holographic art apart from other time-based media. This project integrates 3D printing technology to explore the nature of material aesthetics, transiting between the material world and holographic space. In addition, this series of creations also reveals the unique temporal quality of a hologram's presence and absence, an ambiguous relationship existing in this medium.

  10. Implementation of virtual models from sheet metal forming simulation into physical 3D colour models using 3D printing

    NASA Astrophysics Data System (ADS)

    Junk, S.

    2016-08-01

    Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results of simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could represent the virtual results are lacking. Physical 3D models can be created using 3D printing and serve as illustrations that promote a better understanding of the simulation results. In this way, the results of the simulation can be made more “comprehensible” within a development team. This paper presents the possibilities of 3D colour printing, with particular consideration of the requirements arising from sheet metal forming simulation. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is expounded upon on the basis of simulation results.

  11. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  12. X3DOM as Carrier of the Virtual Heritage

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Behr, J.; Graf, H.

    2011-09-01

    Virtual Museums (VMs) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is a shortcut that comprehends various types of digital creations. One of the carriers for the communication of virtual heritage on the future internet, as a de-facto standard, is the browser front-end presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, thus imposing new strategies for web inclusion. 3D content must become a first-class web medium that can be created, modified, and shared in the same way as text, images, audio and video are handled on the web right now. A new integration model based on a DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for virtual heritage at the future internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.

  13. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations and multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  14. The Components of Effective Teacher Training in the Use of Three-Dimensional Immersive Virtual Worlds for Learning and Instruction Purposes: A Literature Review

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin

    2014-01-01

    The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…

  15. Cross-Cultural Discussions in a 3D Virtual Environment and Their Affordances for Learners' Motivation and Foreign Language Discussion Skills

    ERIC Educational Resources Information Center

    Jauregi, Kristi; Kuure, Leena; Bastian, Pim; Reinhardt, Dennis; Koivisto, Tuomo

    2015-01-01

    Within the European TILA project a case study was carried out where pupils from schools in Finland and the Netherlands engaged in debating sessions using the 3D virtual world of OpenSim once a week for a period of 5 weeks. The case study had two main objectives: (1) to study the impact that the discussion tasks undertaken in a virtual environment…

  16. jsc2005e04513

    NASA Image and Video Library

    2005-02-03

    JSC2005-E-04513 (3 Feb. 2005) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center to rehearse some of his duties on the upcoming mission to the international space station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.

  17. Simulating 3D deformation using connected polygons

    NASA Astrophysics Data System (ADS)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

    In modern 3D applications, interaction between the user and the virtual world is an important factor in increasing realism. This interaction can be visualized in many forms, one of which is object deformation. There are many ways to simulate object deformation in a virtual 3D world, each with a different level of realism and performance. Our objective is to present a new method to simulate object deformation by using graph-connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.
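    The abstract does not specify how the polygon graph drives the deformation. Purely as a hypothetical illustration of the graph-connected idea, the sketch below pushes a hit vertex inward and propagates a displacement that decays with graph distance; all function and parameter names are assumptions, not the paper's algorithm:

    ```python
    from collections import deque

    def deform(vertices, edges, hit, depth, falloff=0.5, max_hops=3):
        """Displace the hit vertex by `depth` along -z and propagate a
        geometrically decaying displacement to graph-connected
        neighbours, breadth-first. Returns a new vertex list."""
        # Build an adjacency list from the edge set.
        adj = {i: set() for i in range(len(vertices))}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        # BFS to find each vertex's graph distance from the hit point.
        hops = {hit: 0}
        queue = deque([hit])
        while queue:
            v = queue.popleft()
            if hops[v] == max_hops:
                continue
            for n in adj[v]:
                if n not in hops:
                    hops[n] = hops[v] + 1
                    queue.append(n)
        # Apply a displacement that decays with graph distance.
        out = []
        for i, (x, y, z) in enumerate(vertices):
            if i in hops:
                z -= depth * (falloff ** hops[i])
            out.append((x, y, z))
        return out
    ```

    Working on graph distance rather than Euclidean distance is what would make such a scheme cheap: no spatial search is needed, only the mesh connectivity that is already stored.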

  18. Interreality: A New Paradigm for E-health.

    PubMed

    Riva, Giuseppe

    2009-01-01

    "Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.

  19. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  20. Comparison of 3D dynamic virtual model to link segment model for estimation of net L4/L5 reaction moments during lifting.

    PubMed

    Abdoli-Eramaki, Mohammad; Stevenson, Joan M; Agnew, Michael J; Kamalzadeh, Amin

    2009-04-01

    The purpose of this study was to validate a 3D dynamic virtual model for lifting tasks against a validated link segment model (LSM). A face-validation study was conducted by collecting x, y, z coordinate data and using them in both the virtual and LSM models. An upper-body virtual model was needed to calculate the 3D torques about human joints for use in simulated lifting styles and to estimate the effect of external mechanical devices on the human body. First, the model had to be validated to ensure it provided accurate estimates of 3D moments in comparison to a previously validated LSM. Three synchronised Fastrak units with nine sensors were used to record data from one male subject who completed dynamic box lifting under 27 different load conditions (box weights (3), lifting techniques (3) and rotations (3)). The external moments about the three axes of L4/L5 were compared for both models. A pressure switch on the box was used to denote the start and end of the lift. Excellent agreement [image omitted] was found between the two models for dynamic lifting tasks, especially for larger moments in flexion and extension. This virtual model was considered valid for use in a complete simulation of the upper-body skeletal system. This biomechanical virtual model of the musculoskeletal system gives researchers and practitioners a better tool to study the causes of low back pain (LBP) and the effect of intervention strategies, by permitting the researcher to see and control a virtual subject's motions.
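    As a reminder of the quantity the two models estimate, the net external moment about a joint centre such as L4/L5 is the sum of r × F contributions from the external loads. A deliberately static, simplified sketch (the study's models are dynamic and also account for segment inertia):

    ```python
    import numpy as np

    def net_moment(loads, joint):
        """Net external moment (N·m) about a joint centre.

        loads: iterable of (application_point, force_vector) pairs,
               points in metres, forces in newtons.
        joint: joint centre position in metres.
        Static sketch only: inertial terms are ignored."""
        joint = np.asarray(joint, dtype=float)
        m = np.zeros(3)
        for point, force in loads:
            r = np.asarray(point, dtype=float) - joint  # lever arm
            m += np.cross(r, np.asarray(force, dtype=float))
        return m
    ```

    For instance, a 100 N downward load held 0.4 m anterior to the joint produces a 40 N·m moment about the joint's transverse axis, which is the kind of flexion/extension moment the abstract reports agreeing best between the two models.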

  1. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax along a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, the two eyes collect fractional images from different projectors, and all viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire apparatus is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  2. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping.…

  3. IYA2009 in Second Life

    NASA Astrophysics Data System (ADS)

    Gauthier, Adrienne J.

    2009-05-01

Highlights from the first 6 months of the IYA2009 island in the multi-user 3D virtual world called Second Life® will be shown. Future plans for exhibits and events will be discussed. You can find the 'Astronomy 2009' island by visiting this URL: http://secondastronomy.org/Astronomy2009/ which will trigger a teleport to our space. Keep up with our project at http://secondastronomy.org. Special thanks go to our primary sponsors: 400 Years of the Telescope/Interstellar Studios and The University of Arizona Department of Astronomy.

  4. [The virtual reality simulation research of China Mechanical Virtual Human based on the Creator/Vega].

    PubMed

    Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng

    2010-10-01

The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images, and it has great practical application significance. This paper introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and enormous data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way for the virtual reality application of CMVH.

  5. Virtual Reality Website of Indonesia National Monument and Its Environment

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.

    2017-02-01

The National Monument (Monumen Nasional) is an Indonesian national monument located in Jakarta. It is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia, and it also houses a museum on the history of the country. To provide information to the general public, in this research we created and developed 3D graphics models of the National Monument and its surrounding environment. Virtual reality technology was used to display the visualization of the National Monument and its surroundings in 3D graphical form; current programming technology makes it possible to display 3D objects in an internet browser. This research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a 3D website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality model of the whole site was divided into a number of scenes so that it can be displayed with real-time visualization.

  6. Estimating Three-Dimensional Orientation of Human Body Parts by Inertial/Magnetic Sensing

    PubMed Central

    Sabatini, Angelo Maria

    2011-01-01

User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics and virtual reality, where they can be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measuring performance that is critical when it comes to achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation. PMID:22319365

  7. Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing.

    PubMed

    Sabatini, Angelo Maria

    2011-01-01

User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics and virtual reality, where they can be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measuring performance that is critical when it comes to achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation.
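One of the simplest fusion schemes of the kind this review surveys is the complementary filter, which blends short-term gyroscope integration (accurate but drifting) with the accelerometer's gravity-based tilt estimate (noisy but drift-free). A minimal single-axis sketch; the function names, gain (alpha = 0.98) and sample time are illustrative assumptions, not taken from the paper:

```python
import math

def accel_to_pitch(ax, ay, az):
    """Pitch angle implied by the gravity vector the accelerometer measures."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One update: blend gyro integration (accurate short-term) with the
    accelerometer tilt estimate (drift-free long-term)."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# With zero angular rate and a level sensor, an initial error decays away
pitch = 0.1  # rad, deliberately wrong initial estimate
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_pitch=accel_to_pitch(0.0, 0.0, 9.81),
                                 dt=0.01)
```

Full 3D trackers generalize this idea to quaternions and add a magnetometer for heading, as the reviewed Kalman and complementary approaches do.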

  8. User Interface Technology Transfer to NASA's Virtual Wind Tunnel System

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1998-01-01

    Funded by NASA grants for four years, the Brown Computer Graphics Group has developed novel 3D user interfaces for desktop and immersive scientific visualization applications. This past grant period supported the design and development of a software library, the 3D Widget Library, which supports the construction and run-time management of 3D widgets. The 3D Widget Library is a mechanism for transferring user interface technology from the Brown Graphics Group to the Virtual Wind Tunnel system at NASA Ames as well as the public domain.

  9. A Second Life for eHealth: Prospects for the Use of 3-D Virtual Worlds in Clinical Psychology

    PubMed Central

    Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-01-01

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed. PMID:18678557

  10. Virtual working systems to support R&D groups

    NASA Astrophysics Data System (ADS)

    Dew, Peter M.; Leigh, Christine; Drew, Richard S.; Morris, David; Curson, Jayne

    1995-03-01

The paper reports on progress at Leeds University in building a Virtual Science Park (VSP) to enhance the University's ability to interact with industry and grow its applied research and workplace learning activities. The VSP exploits advances in real-time collaborative computing and networking to provide an environment that meets the objectives of physically based science parks without the need for the organizations to relocate. It provides an integrated set of services (e.g. virtual consultancy, work-based learning) built around a structured person-centered information model. This model supports the integration of tools for: (a) navigating around the information space; (b) browsing information stored within the VSP database; (c) communicating through a variety of person-to-person collaborative tools; and (d) working with the information stored in the VSP, including the relationships to other information that support the underlying model. The paper gives an overview of a generic virtual working system based on X.500 directory services and the World-Wide Web that can be used to support the Virtual Science Park. Finally, the paper discusses some of the research issues that need to be addressed to fully realize a Virtual Science Park.

  11. A randomized, double-blind evaluation of D-cycloserine or alprazolam combined with virtual reality exposure therapy for posttraumatic stress disorder in Iraq and Afghanistan War veterans.

    PubMed

    Rothbaum, Barbara Olasov; Price, Matthew; Jovanovic, Tanja; Norrholm, Seth D; Gerardi, Maryrose; Dunlop, Boadie; Davis, Michael; Bradley, Bekh; Duncan, Erica J; Rizzo, Albert; Ressler, Kerry J

    2014-06-01

    The authors examined the effectiveness of virtual reality exposure augmented with D-cycloserine or alprazolam, compared with placebo, in reducing posttraumatic stress disorder (PTSD) due to military trauma. After an introductory session, five sessions of virtual reality exposure were augmented with D-cycloserine (50 mg) or alprazolam (0.25 mg) in a double-blind, placebo-controlled randomized clinical trial for 156 Iraq and Afghanistan war veterans with PTSD. PTSD symptoms significantly improved from pre- to posttreatment across all conditions and were maintained at 3, 6, and 12 months. There were no overall differences in symptoms between D-cycloserine and placebo at any time. Alprazolam and placebo differed significantly on the Clinician-Administered PTSD Scale score at posttreatment and PTSD diagnosis at 3 months posttreatment; the alprazolam group showed a higher rate of PTSD (82.8%) than the placebo group (47.8%). Between-session extinction learning was a treatment-specific enhancer of outcome for the D-cycloserine group only. At posttreatment, the D-cycloserine group had the lowest cortisol reactivity and smallest startle response during virtual reality scenes. A six-session virtual reality treatment was associated with reduction in PTSD diagnoses and symptoms in Iraq and Afghanistan veterans, although there was no control condition for the virtual reality exposure. There was no advantage of D-cycloserine for PTSD symptoms in primary analyses. In secondary analyses, alprazolam impaired recovery and D-cycloserine enhanced virtual reality outcome in patients who demonstrated within-session learning. D-cycloserine augmentation reduced cortisol and startle reactivity more than did alprazolam or placebo, findings that are consistent with those in the animal literature.

  12. Magical Stories: Blending Virtual Reality and Artificial Intelligence.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    Artificial intelligence (AI) techniques and virtual reality (VR) make possible powerful interactive stories, and this paper focuses on examples of virtual characters in three dimensional (3-D) worlds. Waldern, a virtual reality game designer, has theorized about and implemented software design of virtual teammates and opponents that incorporate AI…

  13. Systematic literature review of digital three-dimensional superimposition techniques to create virtual dental patients.

    PubMed

    Joda, Tim; Brägger, Urs; Gallucci, German

    2015-01-01

    Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.

  14. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Three main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution for creating a full 3D city model from images alone, and these image-based methods have their own limitations. This paper presents a new approach to image-based virtual 3D city modeling using close range photogrammetry, divided into three stages: data acquisition, 3D data processing and data combination. In the data acquisition stage, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required, most suitable frames were selected for 3D processing. In the second stage, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third stage, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the model were carried out, and after texturing and rendering a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area was the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city.
Aerial photography is restricted in many countries and high resolution satellite images are costly; the proposed method, by contrast, is based only on simple video recording of an area, which makes it well suited to 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal administration, urban and environmental management and the real-estate industry. This study therefore provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.
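The frame-selection step in the acquisition stage (keeping only the minimum required, suitable frames from the video) can be approximated with a simple sharpness filter. A hedged sketch, not the authors' procedure: it scores NumPy arrays standing in for grayscale frames by Laplacian variance, and the sampling step and threshold are arbitrary illustrative choices:

```python
import numpy as np

def sharpness(frame):
    """Variance of a 4-neighbour Laplacian response; blurred frames score low."""
    lap = (-4.0 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return float(lap.var())

def select_frames(frames, step=5, min_sharpness=1.0):
    """Take every `step`-th frame, then drop frames too blurred to be useful."""
    return [f for f in frames[::step] if sharpness(f) >= min_sharpness]

rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))   # stands in for a detailed frame
blurred = np.ones((64, 64))             # stands in for a featureless frame
picked = select_frames([sharp, blurred] * 10, step=1)  # keeps the 10 sharp frames
```

In a real pipeline the selected frames would then feed a structure-from-motion reconstruction, as the paper's second stage describes.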

  15. Using virtual ridge augmentation and 3D printing to fabricate a titanium mesh positioning device: A novel technique letter.

    PubMed

    Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime

    2018-05-02

This technique describes a novel approach to planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from a cone beam computed tomography (CBCT) scan and a wax-pattern of the final prosthetic outcome. The required bone volume (horizontal and vertical) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured/preformed TiMe on the 3D printed model, a positioning jig was made to aid the placement of the TiMe as planned during surgery. Although this technique does not affect the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig, reducing surgical time and error.

  16. INSA Virtual Labs: a new R+D framework for innovative space science and technology

    NASA Astrophysics Data System (ADS)

    Cardesin Moinelo, Alejandro; Sanchez Portal, Miguel

    2012-10-01

    The company INSA (Ingeniería y Servicios Aeroespaciales) has given support to ESA Scientific missions for more than 20 years and is one of the main companies present in the European Space Astronomy Centre (ESAC) in Madrid since its creation. INSA personnel at ESAC provide high level technical and scientific support to ESA for all Astronomy and Solar System missions. In order to improve and maintain the scientific and technical competences among the employees, a research group has been created with the name "INSA Virtual Labs". This group coordinates all the R+D activities carried out by INSA personnel at ESAC and aims to establish collaborations and improve synergies with other research groups, institutes and universities. This represents a great means to improve the visibility of these activities towards the scientific community and serves as breeding ground for new innovative ideas and future commercial products.

  17. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  18. 3D models as a platform for urban analysis and studies on human perception of space

    NASA Astrophysics Data System (ADS)

    Fisher-Gewirtzman, D.

    2012-10-01

The objective of this work is to develop integrated visual analysis and modelling for environmental and urban systems with respect to interior space layout and functionality. The work involves interdisciplinary research efforts that focus primarily on the architectural design discipline, yet incorporates experts from other disciplines such as geoinformatics, computer science and environment-behavior studies. It integrates an advanced Spatial Openness Index (SOI) model within a realistic geovisualized Geographical Information System (GIS) environment, with assessment based on subjective residents' evaluations. The advanced SOI model measures the volume of visible space at any required viewpoint, in practice for every room or function, and enables accurate 3D simulation of the built environment with regard to built structure and surrounding vegetation. This paper demonstrates the work on a case study: a 3D model of the Neve-Shaanan neighbourhood in Haifa was developed, and students who live in this neighbourhood participated in the research. Their apartments were modelled in detail and inserted into a general model representing the topography and the volumes of the buildings. The visible space for each room in every apartment was documented and measured, and at the same time the students were asked to answer questions regarding their perception of space and the view from their residence. The results of this research show a potential contribution to professional users such as researchers, designers and city planners; the model can easily be used both by professionals and by non-professionals such as city dwellers, contractors and developers. This work continues with additional case studies covering different building typologies and a variety of functions, using virtual reality tools.
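The SOI idea described above, measuring the volume of visible space from a viewpoint, can be approximated by casting rays through a voxel occupancy grid and averaging the unobstructed path lengths. A toy sketch, not the authors' SOI implementation (grid size, ray count and step length are arbitrary):

```python
import numpy as np

def visible_volume(occ, viewpoint, n_rays=500, max_dist=20.0, step=0.25, seed=1):
    """Approximate spatial openness at a viewpoint: cast random rays through
    a boolean occupancy grid and average the free path length per ray."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origin = np.asarray(viewpoint, dtype=float)
    total = 0.0
    for d in dirs:
        t = 0.0
        while t < max_dist:
            i, j, k = (origin + t * d).astype(int)
            if (not (0 <= i < occ.shape[0] and 0 <= j < occ.shape[1]
                     and 0 <= k < occ.shape[2])) or occ[i, j, k]:
                break  # ray left the model or hit a solid voxel
            t += step
        total += t
    return total / n_rays  # mean unobstructed sight-line length, in grid units

empty = np.zeros((16, 16, 16), dtype=bool)
walled = empty.copy()
walled[8, :, :] = True  # a wall bisecting the space
open_score = visible_volume(empty, (4.0, 8.0, 8.0))
walled_score = visible_volume(walled, (4.0, 8.0, 8.0))  # wall lowers the score
```

A production model would trace rays against actual building and vegetation geometry, but the openness comparison between viewpoints works the same way.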

  19. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher, or missed entirely, if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the folding of enhancers over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
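One common way to obtain 3D coordinates from Hi-C interaction frequencies, which may well differ from the pipeline used in this work, is to convert frequencies to pseudo-distances (d ∝ 1/f) and embed them with classical multidimensional scaling. A small sketch with a hypothetical 4-locus contact matrix:

```python
import numpy as np

def hic_to_3d(freq, alpha=1.0):
    """Embed loci in 3D: turn interaction frequencies into pseudo-distances
    (d ~ 1/f**alpha) and run classical multidimensional scaling."""
    d = 1.0 / np.power(np.maximum(freq, 1e-9), alpha)
    np.fill_diagonal(d, 0.0)
    n = d.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * centering @ (d ** 2) @ centering  # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:3]             # three largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Hypothetical 4-locus contact matrix: pairs (0,1) and (2,3) interact strongly
f = np.array([[9.0, 3.0, 1.0, 1.0],
              [3.0, 9.0, 1.0, 1.0],
              [1.0, 1.0, 9.0, 3.0],
              [1.0, 1.0, 3.0, 9.0]])
xyz = hic_to_3d(f)  # strongly interacting loci land closer together
```

The resulting coordinates are what a viewer (virtual reality or otherwise) would render as a chromosome backbone, with genomic annotations mapped onto the curve.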

  20. "torino 1911" Project: a Contribution of a Slam-Based Survey to Extensive 3d Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Della Coletta, C.; Sammartano, G.; Spanò, A.; Spreafico, A.

    2018-05-01

In the framework of the digital documentation of complex environments, advanced geomatics research offers integrated solutions and multi-sensor strategies for accurate 3D reconstruction of stratified structures and articulated volumes in the heritage domain. The use of handheld devices for rapid mapping, both image- and range-based, can help produce easy-to-use and easily navigable 3D models for documentation projects. These types of reality-based models, with their tailored, integrated geometric and radiometric aspects, could support valorisation and communication projects including virtual reconstructions, interactive navigation settings and immersive reality for dissemination purposes, evoking past places and atmospheres. This research is situated within the "Torino 1911" project, led by the University of San Diego (California) in cooperation with PoliTo. The entire project is conceived for multi-scale reconstruction of the real and no-longer-existing structures in the whole park space of more than 400,000 m2, for a virtual and immersive visualization of the Turin 1911 International "Fabulous Exposition" event, held in the Valentino Park. In the presented research in particular, a 3D metric documentation workflow is proposed and validated in order to exploit the potential of LiDAR mapping with a handheld SLAM-based device, the ZEB REVO Real Time instrument by GeoSLAM (2017 release), instead of consolidated TLS systems. Starting from these kinds of models, the crucial aspects of trajectory performance in the 3D reconstruction and of the radiometric content from imaging approaches are considered, specifically by means of the compared use of common DSLR cameras and portable sensors.

  1. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    PubMed

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

The volumetric shape of a human embryo and its development are hard to comprehend when viewed as 2D schemes in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated using a morphing technique applied to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology.

  2. Enhancing Scientific Collaboration, Transparency, and Public Access: Utilizing the Second Life Platform to Convene a Scientific Conference in 3-D Virtual Space

    NASA Astrophysics Data System (ADS)

    McGee, B. W.

    2006-12-01

Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at-large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible to both begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an embedded physics engine, streaming audio and video capability, and unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist, and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and to perform demonstrations or premier three-dimensional renderings of objects, processes, or information.
An enhanced presentation like those possible with Second Life would be more engaging to non-scientists, and such an event would be accessible to the general users of Second Life, who would have an unprecedented opportunity to witness an example of scientific collaboration typically reserved for members of a particular field or focus group. With a minimal investment in advertising or promotion, both in real and virtual space, the possibility exists for scientific information and interaction to reach a far broader audience through Second Life than with any other currently available means of comparable cost.

  3. Virtual museum of Japanese Buddhist temple features for intercultural communication

    NASA Astrophysics Data System (ADS)

    Kawai, Takashi; Takao, Hidenobu; Inoue, Tetsuri; Miyamoto, Hiroyuki; Noro, Kageyu

    1998-04-01

    This paper describes the production and presentation of an experimental virtual museum of Japanese Buddhist art. This medium can provide an easy way to introduce a cultural heritage to people of different cultures. The virtual museum consisted of a multimedia program that included stereoscopic 3D movies of Buddhist statues; binaural 3D sounds of Buddhist ceremonies and the fragrance of incense from the Buddhist temple. The aim was to reproduce both the Buddhist artifacts and atmosphere as realistically as possible.

  4. Photographic coverage of STS-112 during EVA 3 in VR Lab.

    NASA Image and Video Library

    2002-08-21

    JSC2002-E-34622 (21 August 2002) --- Astronaut David A. Wolf, STS-112 mission specialist, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Atlantis. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with ISS elements.

  5. Pharmacophore Based 3D-QSAR, Virtual Screening and Docking Studies on Novel Series of HDAC Inhibitors with Thiophen Linker as Anticancer Agents.

    PubMed

    Patel, Preeti; Singh, Avineesh; Patel, Vijay K; Jain, Deepak K; Veerasamy, Ravichandran; Rajak, Harish

    2016-01-01

Histone deacetylase (HDAC) inhibitors can reactivate gene expression and inhibit the growth and survival of cancer cells. The aim was to identify the important pharmacophoric features and correlate 3D chemical structure with biological activity using 3D-QSAR and pharmacophore modeling studies. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module; a pharmacophore hypothesis represents the 3D arrangement of molecular features necessary for activity. A series of 55 compounds with well-assigned HDAC inhibitory activity was used for 3D-QSAR model development. The best 3D-QSAR model, a five-factor partial least squares (PLS) model with good statistics and predictive ability, achieved Q² = 0.7293 and R² = 0.9811, with a cross-validated coefficient r²cv = 0.9807 and R²pred = 0.7147 and a low standard deviation (0.0952). Additionally, the selected pharmacophore model DDRRR.419 was used as a 3D query for virtual screening against the ZINC database. In the virtual screening workflow, docking studies (HTVS, SP and XP) were carried out against multiple receptors (PDB IDs: 1T69, 1T64, 4LXZ, 4LY1, 3MAX, 2VQQ, 3C10, 1W22). Finally, six compounds were obtained on the basis of high scoring functions (dock scores of -11.2278 to -10.2222 kcal/mol) and diverse structures. The structure-activity correlation was established using virtual screening, docking, energetic-based pharmacophore modelling, pharmacophore and atom-based 3D-QSAR models and their validation. The outcomes of these studies could be further employed in the design of novel HDAC inhibitors with anticancer activity.
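The cross-validated Q² statistic reported in QSAR records like this one is defined as 1 - PRESS/SS over a leave-one-out loop. A sketch of that computation on synthetic data; it uses ordinary least squares in place of the paper's PLS fit, and the descriptor matrix and activities are invented for illustration:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS/SS for a linear model
    (ordinary least squares here, standing in for a PLS fit)."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xtrain = np.column_stack([np.ones(n - 1), X[mask]])
        beta, *_ = np.linalg.lstsq(Xtrain, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ beta  # predict the held-out compound
        press += (y[i] - pred) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# Synthetic data: 55 "compounds" with 3 descriptors and near-linear activity
rng = np.random.default_rng(0)
X = rng.standard_normal((55, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(55)
q2 = loo_q2(X, y)  # close to 1 for a strongly predictive model
```

Because each prediction is made on a compound excluded from the fit, Q² is a more honest indicator of predictive ability than the fitted R².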

  6. Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying

    2014-01-01

    Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…

  7. Adaptive 3D Virtual Learning Environments--A Review of the Literature

    ERIC Educational Resources Information Center

    Scott, Ezequiel; Soria, Alvaro; Campo, Marcelo

    2017-01-01

    New ways of learning have emerged in the last years by using computers in education. For instance, many Virtual Learning Environments have been widely adopted by educators, obtaining promising outcomes. Recently, these environments have evolved into more advanced ones using 3D technologies and taking into account the individual learner needs and…

  8. From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3D Documentation

    NASA Astrophysics Data System (ADS)

    D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.

    2013-07-01

    The research aims to optimize a workflow for architecture documentation: starting from panoramic photos and drawing on available instruments and technologies, it proposes an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how spherical panoramic images can be used for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and image-based modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools allow for the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. The data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic rendering of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with a high-resolution oriented spherical panorama and the 3D reconstructed model at once.

  9. Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts

    ERIC Educational Resources Information Center

    Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.

    2005-01-01

    The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…

  10. Surviving sepsis--a 3D integrative educational simulator.

    PubMed

    Ježek, Filip; Tribula, Martin; Kulhánek, Tomáš; Mateják, Marek; Privitzer, Pavol; Šilar, Jan; Kofránek, Jiří; Lhotská, Lenka

    2015-08-01

    Computer technology offers greater educational possibilities, notably simulation and virtual reality. This paper presents a technology which serves to integrate multiple modalities, namely 3D virtual reality, a node-based simulator, the Physiomodel explorer and explanatory physiological simulators, employing the Modelica language and the Unity3D platform. This emerging tool chain should allow the authors to concentrate more on educational content instead of application development. The technology is demonstrated through a Surviving Sepsis educational scenario targeted at the Microsoft Windows Store platform.

  11. Photographic coverage of STS-112 during EVA 3 in VR Lab.

    NASA Image and Video Library

    2002-08-21

    JSC2002-E-34618 (21 August 2002) --- Astronaut Piers J. Sellers, STS-112 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center (JSC) to rehearse some of his duties on the upcoming mission to the International Space Station (ISS). This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.

  12. Control of vertical posture while elevating one foot to avoid a real or virtual obstacle.

    PubMed

    Ida, Hirofumi; Mohapatra, Sambit; Aruin, Alexander

    2017-06-01

    The purpose of this study is to investigate the control of vertical posture during obstacle avoidance in a real versus a virtual reality (VR) environment. Ten healthy participants stood upright and lifted one leg to avoid colliding with a real obstacle sliding on the floor toward a participant and with its virtual image. Virtual obstacles were delivered by a head mounted display (HMD) or a 3D projector. The acceleration of the foot, center of pressure, and electrical activity of the leg and trunk muscles were measured and analyzed during the time intervals typical for early postural adjustments (EPAs), anticipatory postural adjustments (APAs), and compensatory postural adjustments (CPAs). The results showed that the peak acceleration of foot elevation in the HMD condition decreased significantly when compared with that of the real and 3D projector conditions. Reduced activity of the leg and trunk muscles was seen when dealing with virtual obstacles (HMD and 3D projector) as compared with that seen when dealing with real obstacles. These effects were more pronounced during APAs and CPAs. The onsets of muscle activities in the supporting limb were seen during EPAs and APAs. The observed modulation of muscle activity and altered patterns of movement seen while avoiding a virtual obstacle should be considered when designing virtual rehabilitation protocols.

  13. Building a virtual archive using brain architecture and Web 3D to deliver neuropsychopharmacology content over the Internet.

    PubMed

    Mongeau, R; Casu, M A; Pani, L; Pillolla, G; Lianas, L; Giachetti, A

    2008-05-01

    The vast amount of heterogeneous data generated in various fields of neuroscience, such as neuropsychopharmacology, can hardly be classified using traditional databases. We present here the concept of a virtual archive, spatially referenced over a simplified 3D brain map and accessible over the Internet. A simple prototype (available at http://aquatics.crs4.it/neuropsydat3d) has been realized using current web-based virtual reality standards and technologies. It illustrates how primary literature or summary information can easily be retrieved through hyperlinks mapped onto a 3D schema while navigating through neuroanatomy. Furthermore, 3D navigation and visualization techniques are used to enhance the representation of the brain's neurotransmitters and pathways and the involvement of specific brain areas in particular physiological or behavioral functions. The proposed system shows how the use of a schematic spatial organization of data, widely exploited in other fields (e.g. Geographical Information Systems), can be extremely useful for developing efficient tools for research and teaching in the neurosciences.

  14. Web-based interactive 3D visualization as a tool for improved anatomy learning.

    PubMed

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computed tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created using the segmentation tool, the image series were exported in QuickTime Virtual Reality (QTVR) format and integrated within the web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced, encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA program were positive when compared with anatomy textbooks, but not when compared with dissections. Additionally, knowledge tests suggest a potentially beneficial effect on learning.

  15. Accuracy of Three-Dimensional Planning in Surgery-First Orthognathic Surgery: Planning Versus Outcome.

    PubMed

    Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet

    2018-05-01

    The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria. In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.
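
    The accuracy metrics used in this study follow standard definitions: RMSD over per-landmark deviations, and Bland-Altman bias with 95% limits of agreement. A minimal sketch (the landmark coordinates below are hypothetical, not the study's data):

```python
import numpy as np

def rmsd(planned, actual):
    """Root mean square deviation over per-landmark Euclidean distances."""
    d = np.linalg.norm(planned - actual, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical planned vs. postoperative landmark coordinates (mm)
planned = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
actual = np.array([[1.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
print(round(rmsd(planned, actual), 3))  # 0.707
```

    Bland-Altman limits complement RMSD by showing whether the planning method is systematically biased in one direction, not just how large the deviations are.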

  16. See-through 3D technology for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young

    2017-06-01

    Augmented reality is attracting a lot of attention as one of the most promising next-generation technologies. To move toward the realization of ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticeable to users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality would reconstruct and superimpose 3D virtual information on the real world so that the two are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings with angular and wavelength selectivity. Index-matched optical elements are partially reflective elements that use a compensation element for index matching. Using these optical combiners, we implemented see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with demonstrations and analysis of fundamental research on head-mounted see-through 3D displays.

  17. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    PubMed

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment lead to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded while tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts needed to reach proficiency was significantly lower. The study group showed significantly faster learning in three of the four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted between the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later 2D video box performance.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic-resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short-pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free, differential algebra based multiple-level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.
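
    The baseline that a fast multipole method accelerates is direct pairwise summation of the Coulomb field, which costs O(n²). A minimal brute-force sketch (units and particle data are illustrative, not from the paper; this is the naive method, not the authors' FMM):

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant in N*m^2/C^2

def coulomb_field(pos, q):
    """Brute-force O(n^2) electric field at each particle due to all others.

    pos: (n, 3) positions in meters; q: (n,) charges in coulombs.
    """
    E = np.zeros_like(pos)
    for i in range(len(pos)):
        r = pos[i] - pos                  # vectors from each source to target i
        d = np.linalg.norm(r, axis=1)
        mask = d > 0                      # exclude the self-interaction term
        E[i] = K * np.sum(q[mask, None] * r[mask] / d[mask, None] ** 3, axis=0)
    return E

# two equal positive charges 1 m apart: fields are equal and opposite
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([1e-9, 1e-9])
E = coulomb_field(pos, q)
```

    For the particle counts in a realistic photoemission bunch, this quadratic cost is what makes an O(n) multipole-based evaluation necessary.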

  19. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  20. Three-dimensional virtual planning in orthognathic surgery enhances the accuracy of soft tissue prediction.

    PubMed

    Van Hemelen, Geert; Van Genechten, Maarten; Renier, Lieven; Desmedt, Maria; Verbruggen, Elric; Nadjmi, Nasser

    2015-07-01

    Throughout the history of computing, shortening the gap between the physical and digital world behind the screen has always been strived for. Recent advances in three-dimensional (3D) virtual surgery programs have reduced this gap significantly. Although 3D assisted surgery is now widely available for orthognathic surgery, one might still argue whether a 3D virtual planning approach is a better alternative to a conventional two-dimensional (2D) planning technique. The purpose of this study was to compare the accuracy of a traditional 2D technique and a 3D computer-aided prediction method. A double blind randomised prospective study was performed to compare the prediction accuracy of a traditional 2D planning technique versus a 3D computer-aided planning approach. The accuracy of the hard and soft tissue profile predictions using both planning methods was investigated. There was a statistically significant difference between 2D and 3D soft tissue planning (p < 0.05). The statistically significant difference found between 2D and 3D planning and the actual soft tissue outcome was not confirmed by a statistically significant difference between methods. The 3D planning approach provides more accurate soft tissue planning. However, the 2D orthognathic planning is comparable to 3D planning when it comes to hard tissue planning. This study provides relevant results for choosing between 3D and 2D planning in clinical practice. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Image-Based Virtual Tours and 3D Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: The VisualVersilia 3D Project

    NASA Astrophysics Data System (ADS)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims to offer a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best-preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist of developing: 1. a virtual tour of the site in its current configuration, based on spherical images enhanced by texts, graphics and audio guides, to enable both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, based on a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, based on historical investigation and the analysis of the acquired data.

  2. Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.

    PubMed

    Jackson, Bret; Keefe, Daniel F

    2016-04-01

    Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface.

  3. Design of 3D simulation engine for oilfield safety training

    NASA Astrophysics Data System (ADS)

    Li, Hua-Ming; Kang, Bao-Sheng

    2015-03-01

    Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects and high-level tools such as a scene editor, script editor and program loader. A scripting language has been defined to control the system's progress, events and operating results. A training teacher can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. By extending the entity classes, this engine can be quickly applied to other virtual training areas.
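
    The script-driven idea can be sketched as a toy interpreter that builds a scene of entity objects from text commands. The commands and entity fields below are invented for illustration; the paper does not specify its scripting syntax.

```python
# Toy sketch of a script-driven scene setup; the "spawn"/"set" commands
# are hypothetical, not taken from the paper's scripting language.
class Scene:
    def __init__(self):
        self.entities = {}

    def run_script(self, script):
        for line in script.strip().splitlines():
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue  # skip blank lines and comments
            cmd, name, *args = parts
            if cmd == "spawn":        # spawn <name> <model-file>
                self.entities[name] = {"model": args[0]}
            elif cmd == "set":        # set <name> <property> <value>
                self.entities[name][args[0]] = args[1]

scene = Scene()
scene.run_script("""
# wellhead drill: place a valve and mark it closed
spawn valve1 valve.obj
set valve1 state closed
""")
print(scene.entities["valve1"])  # {'model': 'valve.obj', 'state': 'closed'}
```

    Keeping scene logic in scripts rather than compiled code is what lets a training teacher assemble new scenarios without programming, as the abstract describes.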

  4. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data based on a three-dimensional virtual scene, using 3D visualization technology combined with GIS, so that people's ability to cognize time and space is enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, then explore and analyze their movement and transformation rules interactively, and also replay history and forecast the future. In this paper, the main research objects are vehicle tracks and typhoon paths and their spatial-temporal data; through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of trends and replaying of historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show the changes and developments of a situation with added clarity, but can also be used for prediction and deduction of future developments and changes.

  5. Virtual screening and rational drug design method using structure generation system based on 3D-QSAR and docking.

    PubMed

    Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T

    2003-08-01

    An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with a 3D-QSAR model and docking. Using this method, it is possible to generate a large number of highly diverse molecules and find virtual active lead compounds. The method was validated by the study of a set of anti-tumor drugs. With the constraints of the pharmacophore obtained by DISCO (implemented in SYBYL 6.8), 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with the telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with the receptor than the reference compound selected from experimental data. This investigation showed that the proposed method is a feasible approach to rational drug design with high screening efficiency.

  6. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  7. Design and Implementation of a Self-Directed Stereochemistry Lesson Using Embedded Virtual Three-Dimensional Images in a Portable Document Format

    ERIC Educational Resources Information Center

    Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.

    2012-01-01

    A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…

  8. Evaluating the Usability of Pinchigator, a system for Navigating Virtual Worlds using Pinch Gloves

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Brookman, Stephen; Dumas, Joseph D. II; Tilghman, Neal

    2003-01-01

    Appropriate design of two-dimensional user interfaces (2D U/I) utilizing the well-known WIMP (Window, Icon, Menu, Pointing device) paradigm is well studied, and guidance can be found in several standards. Three-dimensional U/I design is not nearly as mature as 2D U/I design, and standards bodies have not reached consensus on what makes a usable interface. This is especially true when the tools for interacting with the virtual environment may include stereo viewing, real-time trackers and pinch gloves instead of just a mouse and keyboard. Over the last several years the authors have created a 3D U/I system dubbed Pinchigator for navigating virtual worlds, based on the dVise dV/Mockup visualization software, Fakespace Pinch Gloves and Polhemus trackers. The current work tests the usability of the system on several virtual worlds, suggests improvements to increase Pinchigator's usability, and then generalizes about what was learned and how those lessons might be applied to improve other 3D U/I systems.

  9. Model Manipulation and Learning: Fostering Representational Competence with Virtual and Concrete Models

    ERIC Educational Resources Information Center

    Stull, Andrew T.; Hegarty, Mary

    2016-01-01

    This study investigated the development of representational competence among organic chemistry students by using 3D (concrete and virtual) models as aids for teaching students to translate between multiple 2D diagrams. In 2 experiments, students translated between different diagrams of molecules and received verbal feedback in 1 of the following 3…

  10. SciEthics Interactive: Science and Ethics Learning in a Virtual Environment

    ERIC Educational Resources Information Center

    Nadolny, Larysa; Woolfrey, Joan; Pierlott, Matthew; Kahn, Seth

    2013-01-01

    Learning in immersive 3D environments allows students to collaborate, build, and interact with difficult course concepts. This case study examines the design and development of the TransGen Island within the SciEthics Interactive project, a National Science Foundation-funded, 3D virtual world emphasizing learning science content in the context of…

  11. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    ERIC Educational Resources Information Center

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  12. A Head in Virtual Reality: Development of A Dynamic Head and Neck Model

    ERIC Educational Resources Information Center

    Nguyen, Ngan; Wilson, Timothy D.

    2009-01-01

    Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…

  13. ICCE/ICCAI 2000 Full & Short Papers (Virtual Reality in Education).

    ERIC Educational Resources Information Center

    2000

    This document contains the full text of the following full and short papers on virtual reality in education from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A CAL System for Appreciation of 3D Shapes by Surface Development (C3D-SD)" (Stephen C. F. Chan, Andy…

  14. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  15. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  16. Teaching Digital Natives: 3-D Virtual Science Lab in the Middle School Science Classroom

    ERIC Educational Resources Information Center

    Franklin, Teresa J.

    2008-01-01

    This paper presents the development of a 3-D virtual environment in Second Life for the delivery of standards-based science content for middle school students in the rural Appalachian region of Southeast Ohio. A mixed method approach in which quantitative results of improved student learning and qualitative observations of implementation within…

  17. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…

  18. DEEP SPACE: High Resolution VR Platform for Multi-user Interactive Narratives

    NASA Astrophysics Data System (ADS)

    Kuka, Daniela; Elias, Oliver; Martins, Ronald; Lindinger, Christopher; Pramböck, Andreas; Jalsovec, Andreas; Maresch, Pascal; Hörtner, Horst; Brandl, Peter

    DEEP SPACE is a large-scale platform for interactive, stereoscopic and high-resolution content. The spatial and system design of DEEP SPACE addresses the constraints of CAVE-like systems with respect to multi-user interactive storytelling. To serve as both a research platform and a public exhibition space for many people, DEEP SPACE is capable of processing interactive, stereoscopic applications on two projection walls, each 16 by 9 meters in size with a resolution of four times 1080p (4K). The processed applications range from Virtual Reality (VR) environments to 3D movies to computationally intensive 2D productions. In this paper, we describe DEEP SPACE as an experimental VR platform for multi-user interactive storytelling. We focus on the system design relevant to the platform, including the integration of the Apple iPod Touch technology as a VR control, and a special case study that demonstrates the research efforts in the field of multi-user interactive storytelling. The described case study, entitled "Papyrate's Island", provides a prototypical scenario of how physical drawings may impact digital narratives. In this special case, DEEP SPACE helps us explore the hypothesis that drawing, a primordial human creative skill, gives us access to entirely new creative possibilities in the domain of interactive storytelling.

  19. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  20. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.
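To make concrete the kind of mapping a depth-sensor NUI performs, here is a minimal, hypothetical sketch of translating a tracked hand displacement into discrete globe-navigation commands. The function name, axis conventions and 5 cm dead zone are invented for illustration and are not taken from Kinoogle or the Kinect SDK.

```python
def hand_to_navigation(dx, dy, dz, dead_zone=0.05):
    """Map a hand displacement (meters, relative to a calibrated rest pose)
    to pan/zoom commands for a virtual globe.

    Hypothetical sketch: the axis conventions (x east, y north, z toward
    the sensor) and the 5 cm dead zone are illustrative assumptions.
    """
    cmd = {}
    if abs(dx) > dead_zone:   # lateral motion pans in longitude
        cmd["pan_lon"] = "east" if dx > 0 else "west"
    if abs(dy) > dead_zone:   # vertical motion pans in latitude
        cmd["pan_lat"] = "north" if dy > 0 else "south"
    if abs(dz) > dead_zone:   # pushing toward the screen zooms in
        cmd["zoom"] = "in" if dz < 0 else "out"
    return cmd
```

Within the dead zone no command is emitted, which keeps small jitters of an idle hand from moving the view.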

  1. Engineering Internship Program Report

    NASA Technical Reports Server (NTRS)

    Bosch, Brian Y.

    1994-01-01

    Towards the end of the summer, I prepared for a presentation to the chief of the Flight Crew Support Division to obtain funding for Phase 1 of the project. I presented information on the tracking systems, David Ray presented on the POGO and PABF and the integration of the virtual reality systems, and Mike Van Chau talked about other hardware issues such as head-mounted displays, 3-D sound, gloves, graphics platforms, and other peripherals. The funding was approved, and work was to begin at the end of August on evaluating a couple of the tracking systems, integrating the graphics platform and video equipment with the POGO, and building a larger gantry for the POGO. This tour I learned how to effectively gather information and present it in a convincing form to gain funding. I explored an entirely new area of technology, virtual reality, from its most general form down to the finer details of its tracking systems. The experiences over the summer have added a lot of detail to my understanding of work at the Johnson Space Center, life within NASA, and the many possibilities for becoming involved with the space program.

  2. Virtual k-Space Modulation Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.

    2016-07-01

    We report a novel superresolution microscopy approach for imaging fluorescence samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correction.

  3. Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.

    PubMed

    Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin

    2005-09-15

    We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by the diagonalization of two matrices whose dimension is the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space into a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom-centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while our orbitals are orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.
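The projection step the abstract uses as its speed and locality baseline ("projecting the AO basis onto the virtual space") can be sketched in a few lines of dense linear algebra. The function below is an illustrative reconstruction under stated assumptions (the function name, shapes and eigenvalue cutoff are ours, not the authors'): it removes the occupied component from each AO basis vector and Löwdin-orthonormalizes the redundant remainder.

```python
import numpy as np

def projected_virtuals(C_occ, S):
    """Orthonormal virtual orbitals by projection of the AO basis.

    C_occ: (n_ao, n_occ) occupied MO coefficients, orthonormal in the
    overlap metric (C^T S C = I); S: (n_ao, n_ao) AO overlap matrix.
    Illustrative sketch only; the eigenvalue cutoff is an assumption.
    """
    n_ao = S.shape[0]
    # Projector onto the occupied space in the AO metric: P = C C^T S
    P_occ = C_occ @ C_occ.T @ S
    # Project every AO basis vector onto the virtual complement
    V = np.eye(n_ao) - P_occ
    # Overlap of the projected vectors (redundant and nonorthogonal)
    S_v = V.T @ S @ V
    # Drop the null (occupied) directions, then Lowdin-orthonormalize
    w, U = np.linalg.eigh(S_v)
    keep = w > 1e-8 * w.max()
    X = U[:, keep] / np.sqrt(w[keep])
    return V @ X  # columns satisfy Q^T S Q = I
```

The result is orthonormal in the overlap metric but, unlike the authors' construction, carries no guarantee of locality or of smooth dependence on the nuclear coordinates.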

  4. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  5. Augmented reality for breast imaging.

    PubMed

    Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio

    2018-06-01

    Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency view of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references that facilitate the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies of breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. Gadolinium was injected as a contrast agent (0.1 mmol/kg at 2 mL/s) using a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes by providing an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.

  6. Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.

    PubMed

    Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching anatomy includes use of computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and magnified according to need. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view images in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and appreciate spatial relationships among the blood vessels, the skull and the skin. © 2015 American Association of Anatomists.

  7. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  8. Development of an Annular Electron Beam HPM Amplifier

    DTIC Science & Technology

    1994-09-01

    34, Phys. Rev. Lett., 64(19), pp. 2320-2323, 7 May 1990. 9. Lau, Y.Y. and Chernin, D., "A review of the ac space-charge effect in electron-circuit interactions...the Child-Langmuir space-charge limiting current in the beam line. This removes the potential of forming a virtual cathode (Ref. 19). The...propagates the electron beam through a single modulating gap, with a specified voltage, frequency, and gap extent. The beam space charge is an input

  9. Study of Magnetic Field Spatial Variations in the Southern Hemisphere's Low Latitudes due to Different Interplanetary Structures Using the 3-D MHD SWMF/BATSRUS Model

    NASA Astrophysics Data System (ADS)

    Souza, V. M. C. E. S.; Jauer, P. R.; Alves, L. R.; Padilha, A. L.; Padua, M. B.; Vitorello, I.; Alves, M. V.; Da Silva, L. A.

    2017-12-01

    Interplanetary structures such as Coronal Mass Ejections (CME), shocks, Corotating Interaction Regions (CIR) and Magnetic Clouds (MC) directly affect space weather conditions and can cause severe and intense disturbances in the Earth's magnetic field as measured in space and on the ground. During magnetically disturbed periods characterized by world-wide, abrupt variations of the geomagnetic field, large and intense current systems can be induced and amplified within the Earth, even at low latitudes. Such current systems are known as geomagnetically induced currents (GIC) and can damage power transmission lines and transformers and degrade pipelines. As part of an effort to estimate GIC intensities throughout the low to equatorial latitudes of the Brazilian territory, we used the 3-D MHD SWMF/BATSRUS code to estimate spatial variations of the geomagnetic field during periods when the magnetosphere is under the influence of CME and MC structures. Specifically, we used the CalcDeltaB tool (Rastatter et al., Space Weather, 2014) to provide a proxy for the spatial variations of the geomagnetic field, with a 1-minute cadence, at 31 virtual magnetometer stations located in the proposed study region. The stations are spatially arranged in a two-dimensional network with each station being 5 degrees apart in latitude and longitude. In a preliminary analysis, we found that prior to the arrival of each interplanetary structure, there is no appreciable variation in the components of the geomagnetic field between the virtual stations. However, when the interplanetary structures reach the magnetosphere, each station perceives the magnetic field variation differently, so that it is not possible to use a single station to represent the magnetic field perturbation throughout the Brazilian region. We discuss the minimum number of stations and the spacing between them needed to adequately detail the geomagnetic field variations in this region.

  10. Low Q2 jet production at HERA and virtual photon structure

    NASA Astrophysics Data System (ADS)

    H1 Collaboration; Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Beck, M.; Behrend, H.-J.; Beier, C.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Botterweck, F.; Boudry, V.; Bourov, S.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Donovan, K. T.; Dowell, J. D.; Droutskoi, A.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Herynek, I.; Hess, M. F.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. 
L.; Hütte, M.; Ibbotson, M.; Isolarş Sever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Lahmann, R.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Lipinski, J.; List, B.; Lobo, G.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Rick, H.; Riess, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sankey, D. P. 
C.; Schacht, P.; Scheins, J.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schultz-Coulon, H.-C.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Swart, M.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Zálešák, J.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vandenplas, D.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wobisch, M.; Wollatz, H.; Wünsch, E.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.

    1997-12-01

    The transition between photoproduction and deep-inelastic scattering is investigated in jet production at the HERA ep collider, using data collected by the H1 experiment. Measurements of the differential inclusive jet cross-sections dσ_ep/dE_t* and dσ_ep/dη*, where E_t* and η* are the transverse energy and the pseudorapidity of the jets in the virtual photon-proton centre-of-mass frame, are presented for 0

  11. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects

    PubMed Central

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. PMID:28220752

  12. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects.

    PubMed

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. © The Authors, published by EDP Sciences, 2017.

  13. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  14. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  15. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it.

  16. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 4 Report: Virtual Mockup Maintenance Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    Task 4 report of a three-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.

  17. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol, we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  18. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both the sophisticated techniques and the interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with the animator's motion in 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, letting the user control the animation pipeline via voice commands while focusing on his or her hand and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking the animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse-and-keyboard interface for both expert and non-expert users.

  19. 3D for Geosciences: Interactive Tangibles and Virtual Models

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open-source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. We assume a numeric process would be more powerful and efficient than the manual method, though it may lack useful features that GUIs provide. The digital models have applications in mining as an efficient means of replacing topographic functions such as measuring distances and areas. It is also possible to build simulation models, such as drilling templates, and to perform calculations related to 3D spaces. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, obtaining precise 3D images of large surfaces and georeferencing the scan data to interactive maps would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible, 3D-printed models based on scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
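    Matching overlapping regions between two scan files, as described above, essentially reduces to measuring how close one point cloud lies to another. The following is a minimal, hedged sketch of that idea in plain Python; the function name and sample points are illustrative assumptions, not taken from the project's code, and a real pipeline (e.g. PCL) would use a k-d tree instead of brute force:

```python
import math

def mean_nn_distance(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest neighbour
    in cloud_b. Low values suggest the clouds cover the same region and
    are well aligned. Brute force, O(len(a) * len(b))."""
    return sum(min(math.dist(p, q) for q in cloud_b) for p in cloud_a) / len(cloud_a)

# A tiny cloud and a copy shifted 1 unit along X
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shifted = [(x + 1.0, y, z) for x, y, z in cloud]
print(mean_nn_distance(cloud, cloud))    # 0.0 (perfect overlap)
print(mean_nn_distance(cloud, shifted))  # ≈ 0.67 (misaligned)
```

Iteratively minimizing a score like this over candidate rigid transforms is the core of registration algorithms such as ICP, which the open-source libraries mentioned above implement efficiently.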

  20. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is its reliance on RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not give the user the flexibility to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, the AutoCAD DXF and 3D Studio formats, the Wavefront OBJ format, the VideoScape GEO format, and the Intergraph EMS and CATIA stereolithography (STL) formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. When using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  1. Feasibility and validation of virtual autopsy for dental identification using the Interpol dental codes.

    PubMed

    Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy

    2013-05-01

    Virtual autopsy is a medical imaging technique using full-body computed tomography (CT), allowing noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. The study aimed to verify whether PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal and sagittal slices. InSpace® (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution enabled the collection of more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed 3D morphological features of dental and maxillofacial structures are important identifiers. The latter may become particularly relevant in the future, not only because of their inherent spatial features, but also because of increasing preventive dental treatment and the decreasing application of dental restorations. In conclusion, PM full-body CT examinations need to be implemented in the PM dental charting protocols, and the Interpol dental codes should be adapted accordingly. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus, and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since the typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  3. 3D-Lab: a collaborative web-based platform for molecular modeling.

    PubMed

    Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas

    2016-09-01

    The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy to use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automated molecular modeling, such as conformer generation, ligand alignment, molecular docking and simple quantum chemistry protocols. 3D-Lab is designed to be modular and to facilitate the sharing of 3D information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced, automated 3D molecular modeling tasks, with the aim of improving decision-making in drug design projects.

  4. Virtual reality and 3D visualizations in heart surgery education.

    PubMed

    Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver

    2002-01-01

    Computer-assisted teaching plays an increasing role in surgical education. This paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting, and their prototypical implementation into a database-driven, internet-based educational system in heart surgery. A multimedia storyboard was written and digital video was encoded. Understanding of these videos was not always satisfactory; therefore, additional 3D and VR visualizations were modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process, in terms of the integration and orchestration of different multimedia components into educational units, has been started. A virtual model of the heart has been designed. It is highly interactive, and the user is able to rotate it, move it, zoom in for details or even fly through it. It can be explored during the cardiac cycle, and a transparency mode demonstrates the coronary arteries, the movement of the heart valves, and simultaneous blood flow. Myocardial ischemia and the effect of an IMA graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass grafts can be interactively added. 3D models of anastomotic techniques and closed thromboendarterectomy have been developed. Different visualizations have been prototypically implemented into a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web, and have the power to describe surgical anatomy and the principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.

  5. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose obtained using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
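    The pose errors quoted above (distance error in mm, orientation error in degrees) are standard comparisons between an estimated and a ground-truth camera pose. A minimal sketch of both metrics in plain Python; the function names and sample poses are illustrative assumptions, not drawn from the paper's MATLAB implementation:

```python
import math

def translation_error(t_est, t_true):
    """Euclidean distance (e.g. in mm) between estimated and true camera positions."""
    return math.dist(t_est, t_true)

def rotation_error_deg(R_est, R_true):
    """Angle of the relative rotation between two 3x3 rotation matrices,
    using trace(R_est^T @ R_true) = 1 + 2*cos(theta)."""
    # elementwise product sum equals trace(R_est^T @ R_true)
    trace = sum(R_est[k][i] * R_true[k][i] for k in range(3) for i in range(3))
    # clamp against floating-point drift outside [-1, 1]
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(c))

# Example: estimate displaced 3 units, true rotation 90 degrees about Z
R_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
print(translation_error((0, 0, 0), (3, 0, 0)))  # 3.0
print(rotation_error_deg(R_id, Rz90))           # ≈ 90.0
```

Averaging these two quantities over repeated approaches to the target would yield summary figures of the same form as the paper's 3-mm / 2.5-degree result.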

  6. Investigation of tracking systems properties in CAVE-type virtual reality systems

    NASA Astrophysics Data System (ADS)

    Szymaniak, Magda; Mazikowski, Adam; Meironke, Michał

    2017-08-01

    In recent years, many scientific and industrial centers around the world have developed virtual reality systems or laboratories. One of the most advanced solutions is the Immersive 3D Visualization Lab (I3DVL), a CAVE-type (Cave Automatic Virtual Environment) laboratory. It contains two CAVE-type installations: a six-screen installation arranged in the form of a cube, and a four-screen installation, a simplified version of the former. The user's feeling of "immersion" and interaction with the virtual world depend on many factors, in particular on the accuracy of the user tracking system. In this paper, properties of the tracking systems applied in I3DVL were investigated. Two parameters were selected for analysis: the accuracy of the tracking system and the range over which the tracking system detects markers in the space of the CAVE. Measurements of system accuracy were performed for the six-screen installation, equipped with four tracking cameras, along three axes: X, Y, Z. Rotation around the Y axis was also analyzed. The measured tracking system shows good linear and rotational accuracy. The biggest issue was the range of marker monitoring inside the CAVE: it turned out that the tracking system loses sight of the markers in the corners of the installation. For comparison, in the simplified version of the CAVE (the four-screen installation), equipped with eight tracking cameras, this problem did not occur. The obtained results will allow for improvement of CAVE quality.
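    One common way to quantify the linear accuracy measured along the X, Y and Z axes above is a mean absolute error per axis between tracked marker positions and known reference positions. A hedged sketch in plain Python; the function name, units and sample grid are illustrative assumptions, not taken from the paper:

```python
def per_axis_error(measured, reference):
    """Mean absolute error along X, Y and Z between tracked marker
    positions and the known reference positions of a measurement grid."""
    n = len(measured)
    return tuple(
        sum(abs(m[axis] - r[axis]) for m, r in zip(measured, reference)) / n
        for axis in range(3)
    )

# Two sample marker positions (cm): the tracker reads slightly off in X and Z
measured  = [(10.2, 0.0, 5.1), (20.4, 0.0, 4.9)]
reference = [(10.0, 0.0, 5.0), (20.0, 0.0, 5.0)]
print(per_axis_error(measured, reference))  # ≈ (0.3, 0.0, 0.1)
```

Repeating such measurements across a grid of positions inside the CAVE would also expose the marker-visibility gaps in the corners that the study reports.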

  7. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information through an interface to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media such as posters and handouts lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and fieldwork. In the latter case, location-based data is loaded so that, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation, classroom learning and field geoscience learning.

  8. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275

  9. Accuracy of Three-Dimensional Planning in Surgery-First Orthognathic Surgery: Planning Versus Outcome

    PubMed Central

    Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet

    2018-01-01

    Background: The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Methods: Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with a surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operating room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. Results: The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and from 1.27° to 1.45°, within the acceptable clinical criteria. Conclusion: In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS. PMID:29581806
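    The RMSD figures quoted above summarize per-landmark deviations between the virtual plan and the postoperative scan. A minimal illustration of the computation in plain Python; the landmark values are made up for the example and do not come from the study:

```python
import math

def rmsd(planned, outcome):
    """Root mean square deviation between paired planned and postoperative
    landmark measurements (linear values in mm, or angular values in degrees)."""
    squared = [(p - o) ** 2 for p, o in zip(planned, outcome)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical maxillary landmark positions (mm) along one axis
planned = [0.0, 1.0, 2.0, 3.0]
outcome = [0.5, 1.5, 1.0, 4.0]
print(round(rmsd(planned, outcome), 2))  # 0.79
```

Computing this separately for linear and angular landmark sets yields values directly comparable to the study's reported 0.86-1.46 mm and 1.27°-1.45° ranges.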

  10. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  11. Computer Assisted Virtual Environment - CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  12. The Avenor Virtual Trainer Project--A 3D Interactive Training Module on Energy Control Procedures: Development and First Validation Results.

    ERIC Educational Resources Information Center

    Giardina, Max

    This paper examines the implementation of 3D simulation through the development of the Avenor Virtual Trainer and how situated learning and fidelity of model representation become the basis for more effective Interactive Multimedia Training Situations. The discussion will focus on some principles concerned with situated training, simulation,…

  13. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…

  14. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    2018-05-30

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  15. Communication Modes, Persuasiveness, and Decision-Making Quality: A Comparison of Audio Conferencing, Video Conferencing, and a Virtual Environment

    ERIC Educational Resources Information Center

    Lockwood, Nicholas S.

    2011-01-01

    Geographically dispersed teams rely on information and communication technologies (ICTs) to communicate and collaborate. Three ICTs that have received attention are audio conferencing (AC), video conferencing (VC), and, recently, 3D virtual environments (3D VEs). These ICTs offer modes of communication that differ primarily in the number and type…

  16. Virtual Reality and Learning: Where Is the Pedagogy?

    ERIC Educational Resources Information Center

    Fowler, Chris

    2015-01-01

    The aim of this paper was to build upon Dalgarno and Lee's model or framework of learning in three-dimensional (3-D) virtual learning environments (VLEs) and to extend their road map for further research in this area. The enhanced model shares the common goal with Dalgarno and Lee of identifying the learning benefits from using 3-D VLEs. The…

  17. Agreement and reliability of pelvic floor measurements during rest and on maximum Valsalva maneuver using three-dimensional translabial ultrasound and virtual reality imaging.

    PubMed

    Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B

    2016-08-01

    Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³) on conventional 3D ultrasound. Levator ani hiatal volume (in cm³) was also measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or with POP-Q stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when measured using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.

  18. Impact of 3D virtual planning on reconstruction of mandibular and maxillary surgical defects in head and neck oncology.

    PubMed

    Witjes, Max J H; Schepers, Rutger H; Kraeima, Joep

    2018-04-01

    This review describes advances in 3D virtual planning for the reconstruction of mandibular and maxillary surgical defects with full prosthetic rehabilitation. The primary purpose is to provide an overview of various techniques that apply 3D technology safely in primary and secondary reconstructive cases of patients suffering from head and neck cancer. Methods have been developed to maintain control over the margin during surgery when the crucial decisions regarding resection margins and the planning of osteotomies have been predetermined by virtual planning. The unlimited possibilities of designing patient-specific implants can result in creative, uniquely applied solutions for single cases, but should be applied wisely, with knowledge of biomechanical engineering principles. The high surgical accuracy of an executed 3D virtual plan provides tumor margin control during ablative surgery and allows the planned combined use of osseous free flaps and dental implants in the reconstruction in a single surgical procedure. A thorough understanding of the effects of radiotherapy on the reconstruction, soft tissue management, and prosthetic rehabilitation is imperative in individual cases when deciding to use dental implants in patients who received radiotherapy.

  19. Advanced 3-dimensional planning in neurosurgery.

    PubMed

    Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni

    2013-01-01

    During the past decades, medical applications of virtual reality technology have developed rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as feedback systems, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described here. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source software (3D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power available in modern computers. Although it has been found useful for facilitating the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.

  20. Conflict between object structural and functional affordances in peripersonal space.

    PubMed

    Kalénine, Solène; Wamain, Yannick; Decroix, Jérémy; Coello, Yann

    2016-10-01

    Recent studies indicate that competition between conflicting action representations slows down the planning of object-directed actions. The present study aims to assess whether similar conflict effects exist during manipulable object perception. Twenty-six young adults performed reach-to-grasp and semantic judgments on conflictual objects (with competing structural and functional gestures) and non-conflictual objects (with similar structural and functional gestures) presented at different distances in a 3D virtual environment. Results highlight a space-dependent conflict between structural and functional affordances. Perceptual judgments on conflictual objects were slower than perceptual judgments on non-conflictual objects, but only when the objects were presented within reach. Findings demonstrate that competition between structural and functional affordances during object perception induces a processing cost, and further show that object position in space can bias affordance competition. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various well-understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  2. The design, production and clinical application of 3D patient-specific implants with drilling guides for acetabular surgery.

    PubMed

    Merema, B J; Kraeima, J; Ten Duis, K; Wendt, K W; Warta, R; Vos, E; Schepers, R H; Witjes, M J H; IJpma, F F A

    2017-11-01

    An innovative procedure for the development of 3D patient-specific implants with drilling guides for acetabular fracture surgery is presented. By using CT data and 3D surgical planning software, a virtual model of the fractured pelvis was created. During this process the fracture was virtually reduced. Based on the reduced fracture model, patient-specific titanium plates including polyamide drilling guides were designed, 3D printed and milled for intra-operative use. One of the advantages of this procedure is that the personalised plates could be tailored to both the shape of the pelvis and the type of fracture. The optimal screw directions and sizes were predetermined in the 3D model. The virtual plan was translated towards the surgical procedure by using the surgical guides and patient-specific osteosynthesis. Besides the description of the newly developed multi-disciplinary workflow, a clinical case example is presented to demonstrate that this technique is feasible and promising for the operative treatment of complex acetabular fractures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show, firstly, that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to their privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  4. Virtual probing system for medical volume data

    NASA Astrophysics Data System (ADS)

    Xiao, Yongfei; Fu, Yili; Wang, Shuguo

    2007-12-01

    Because of the huge computational load of 3D medical data visualization, interactive exploration of the interior of a dataset has long been a problem to be resolved. In this paper, we present a novel approach to exploring a 3D medical dataset in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. This will be a valuable tool for anatomy education and for the understanding of medical images in medical research.

  5. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have involved an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer-Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
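The Coons surfaces mentioned above can be made concrete with the standard bilinearly blended construction: blend opposite pairs of boundary curves linearly and subtract the bilinear interpolation of the four corners. This is an illustrative sketch, not the paper's implementation; the flat-square boundary curves are assumed example data:

```python
def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0,1]^2.
    c0, c1: bottom/top boundary curves, functions of u -> (x, y, z)
    d0, d1: left/right boundary curves, functions of v -> (x, y, z)
    """
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def mul(s, a): return tuple(s * x for x in a)

    # Linear blends of the two pairs of opposite boundary curves
    lc = add(mul(1 - v, c0(u)), mul(v, c1(u)))
    ld = add(mul(1 - u, d0(v)), mul(u, d1(v)))
    # Bilinear interpolation of the four corner points (counted twice above)
    b = add(add(mul((1 - u) * (1 - v), c0(0)), mul(u * (1 - v), c0(1))),
            add(mul((1 - u) * v, c1(0)), mul(u * v, c1(1))))
    return sub(add(lc, ld), b)

# Assumed example: a flat unit square (all boundaries straight lines in z = 0)
c0 = lambda u: (u, 0.0, 0.0)
c1 = lambda u: (u, 1.0, 0.0)
d0 = lambda v: (0.0, v, 0.0)
d1 = lambda v: (1.0, v, 0.0)
print(coons_patch(c0, c1, d0, d1, 0.5, 0.5))  # (0.5, 0.5, 0.0)
```

For real LoD2 facades the boundary curves would come from the building's segmented edges rather than the toy lambdas above.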

  6. Bioturbo similarity searching: combining chemical and biological similarity to discover structurally diverse bioactive molecules.

    PubMed

    Wassermann, Anne Mai; Lounkine, Eugen; Glick, Meir

    2013-03-25

    Virtual screening using bioactivity profiles has become an integral part of currently applied hit finding methods in the pharmaceutical industry. However, a significant drawback of this approach is that it is only applicable to compounds that have been biologically tested in the past and have sufficient activity annotations for meaningful profile comparisons. Although bioactivity data generated in pharmaceutical institutions are growing on an unprecedented scale, the number of biologically annotated compounds still covers only a minuscule fraction of chemical space. For a newly synthesized compound or an isolated natural product to be biologically characterized across multiple assays may take a considerable amount of time. Consequently, this chemical matter will not be included in virtual screening campaigns based on bioactivity profiles. To overcome this problem, we herein introduce bioturbo similarity searching, which uses chemical similarity to map molecules without biological annotations into bioactivity space and then searches for biologically similar compounds in this reference system. In benchmark calculations on primary screening data, we demonstrate that our approach generally achieves higher hit rates and identifies structurally more diverse compounds than approaches using chemical information only. Furthermore, our method is able to discover hits with novel modes of inhibition that traditional 2D and 3D similarity approaches are unlikely to discover. Test calculations on a set of natural products reveal the practical utility of the approach for identifying novel and synthetically more accessible chemical matter.

  7. Virtual egocenters as a function of display geometric field of view and eye station point

    NASA Technical Reports Server (NTRS)

    Psotka, Joseph

    1993-01-01

    The accurate location of one's virtual egocenter in a geometric space is of critical importance for immersion technologies. This experiment was conducted to investigate the role of field of view (FOV) and observer station point in the perception of the location of one's egocenter (the personal viewpoint) in virtual space. Rivalrous cues to the accurate location of one's egocenter may be one factor involved in simulator sickness. Fourteen subjects binocularly viewed an animated 3D model of the room in which they sat, from an Eye Station Point (ESP) of either 300 or 800 millimeters. The display was a 190 by 245 mm monitor, at a resolution of 320 by 200 pixels with 256 colors. They saw four models of the room, designed with geometric field of view (FOVg) conditions of 18, 48, 86, and 140 degrees. They drew the apparent paths of the camera in the room on a bitmap of the room as seen from infinitely far above. Large differences in the paths of the camera were seen as a function of both FOVg and ESP. Ten of the subjects were then asked to find the position for each display that minimized camera motion. The results fit well with predictions from an equation that takes the ratio of the human FOV (roughly 180 degrees) to FOVg, times the Geometric Eye Point (GEP) of the imager: Zero Station Point = (180/FOVg)*GEP.
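The predictive equation quoted at the end of the abstract can be expressed directly; a small sketch (the FOVg and GEP values in the example are assumed illustrations, not the paper's data):

```python
def zero_station_point(fov_g_deg: float, gep_mm: float,
                       human_fov_deg: float = 180.0) -> float:
    """Predicted station point (mm) that minimizes apparent camera motion:
    Zero Station Point = (human FOV / FOVg) * Geometric Eye Point."""
    return (human_fov_deg / fov_g_deg) * gep_mm

# Assumed example: FOVg = 48 degrees, geometric eye point = 300 mm.
print(zero_station_point(48.0, 300.0))  # 1125.0
```

Note that when FOVg matches the human FOV of 180 degrees, the predicted zero station point coincides with the geometric eye point itself.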

  8. Value of 3D printing for the comprehension of surgical anatomy.

    PubMed

    Marconi, Stefania; Pugliese, Luigi; Botti, Marta; Peri, Andrea; Cavazzi, Emma; Latteri, Saverio; Auricchio, Ferdinando; Pietrabissa, Andrea

    2017-10-01

    In a preliminary experience, we claimed the potential value of 3D printing technology for pre-operative counseling and surgical planning. However, no objective analysis has ever assessed its additional benefit in transferring anatomical information from radiology to the final users. We decided to validate the pre-operative use of 3D-printed anatomical models in patients with solid-organ diseases as a new tool to deliver morphological information. Fifteen patients scheduled for laparoscopic splenectomy, nephrectomy, or pancreatectomy were selected and, for each, a full-size 3D virtual anatomical object was reconstructed from a contrast-enhanced MDCT (Multiple Detector Computed Tomography) scan and then prototyped using a 3D printer. After having carefully evaluated, in a random sequence, conventional contrast MDCT scans, virtual 3D reconstructions on a flat monitor, and 3D-printed models of the same anatomy for each selected case, thirty subjects with different expertise in radiological imaging (10 medical students, 10 surgeons, and 10 radiologists) were administered a multiple-item questionnaire. Crucial issues for the anatomical understanding and the pre-operative planning of the scheduled procedure were addressed. The visual and tactile inspection of the 3D models allowed the best anatomical understanding, with faster and clearer comprehension of the surgical anatomy. As expected, the less experienced medical students perceived the highest benefit (53.9% ± 4.14 of correct answers with 3D-printed models, compared to 53.4% ± 4.6 with virtual models and 45.5% ± 4.6 with MDCT), followed by surgeons and radiologists. The average time participants spent assessing the 3D model was shorter (60.67 ± 25.5 s) than that for the corresponding virtual 3D reconstruction (70.8 ± 28.18 s) or conventional MDCT scan (127.04 ± 35.91 s). 3D-printed models help to transfer complex anatomical information to clinicians, proving useful for pre-operative planning, intra-operative navigation, and surgical training.

  9. A hybrid 3D spatial access method based on quadtrees and R-trees for globe data

    NASA Astrophysics Data System (ADS)

    Gong, Jun; Ke, Shengnan; Li, Xiaomin; Qi, Shuhua

    2009-10-01

    A 3D spatial access method for globe data is a crucial technique for a virtual earth. This paper presents a new maintenance method to index 3D objects distributed over the whole surface of the earth, which integrates 1:1,000,000-scale topographic map tiles, a quadtree, and an R-tree. When traditional methods are extended into 3D space, the performance of the spatial index deteriorates badly, as with the 3D R-tree. To solve this problem effectively, a new dynamic R-tree algorithm is put forward, comprising two sub-procedures: node choosing and node splitting. The node-choosing algorithm adopts a new strategy: not the traditional top-to-bottom traversal, but first bottom-to-top and then top-to-bottom. This strategy effectively mitigates the negative influence of node overlap. In the node-split algorithm, a 2-to-3 split mode substitutes for the traditional 1-to-2 mode, which better accounts for the shape and size of nodes. Because of the resulting well-balanced tree shape, this R-tree method can easily integrate the concept of LOD (level of detail). It can therefore later be implemented in commercial DBMSs and adopted in time-critical 3D GIS systems.
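The quadtree half of such a hybrid index can be illustrated with a point-to-tile-key mapping: objects sharing a key prefix fall in the same coarse tile, and a per-tile R-tree would then index the objects inside each leaf tile. This is a generic, assumed sketch (simple equirectangular tiling), not the authors' implementation:

```python
def quadkey(lon: float, lat: float, level: int) -> str:
    """Quadtree key for a lon/lat point at the given subdivision level.
    Quadrant digits: 0 = NW, 1 = NE, 2 = SW, 3 = SE of the current cell."""
    x0, x1, y0, y1 = -180.0, 180.0, -90.0, 90.0
    key = []
    for _ in range(level):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quadrant = (2 if lat < ym else 0) + (1 if lon >= xm else 0)
        key.append(str(quadrant))
        # Shrink the cell to the chosen quadrant
        x0, x1 = (xm, x1) if lon >= xm else (x0, xm)
        y0, y1 = (y0, ym) if lat < ym else (ym, y1)
    return "".join(key)

# Example: a point in the northern/eastern hemisphere at level 3.
print(quadkey(116.4, 39.9, 3))  # 130
```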

  10. Virtual Worlds? "Outlook Good"

    ERIC Educational Resources Information Center

    Kelton, AJ

    2008-01-01

    Many people believed that virtual worlds would end up like the eight-track audiotape: a memory of something no longer used (or useful). Yet today there are hundreds of higher education institutions represented in three-dimensional (3D) virtual worlds such as Active Worlds and Second Life. The movement toward the virtual realm as a viable teaching…

  11. Student performance and appreciation using 3D vs. 2D vision in a virtual learning environment.

    PubMed

    de Boer, I R; Wesselink, P R; Vervoorn, J M

    2016-08-01

    The aim of this study was to investigate differences in the performance and appreciation of students working in a virtual learning environment with two-dimensional (2D) or three-dimensional (3D) vision. One hundred and twenty-four first-year dental students, randomly divided into two groups, performed a manual dexterity exercise on the Simodont dental trainer with automatic assessment. Group 1 practised in 2D vision and Group 2 in 3D. All of the students practised five times for 45 min and then took a test using the vision they had practised in. After test 1, all of the students switched the type of vision to control for the learning curve: Group 1 practised in 3D and took a test in 3D, whilst Group 2 practised in 2D and took the test in 2D. To pass, three of five exercises had to be successfully completed within a time limit. The students filled out a questionnaire after completing test 2. The results show that students working with 3D vision achieved significantly better results than students who worked in 2D. Ninety-five per cent of the students filled out the questionnaire, and over 90 per cent preferred 3D vision. The use of 3D vision in a virtual learning environment has a significant positive effect on the performance of the students as well as on their appreciation of the environment. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Customized "In-Office" Three-Dimensional Printing for Virtual Surgical Planning in Craniofacial Surgery.

    PubMed

    Mendez, Bernardino M; Chiodo, Michael V; Patel, Parit A

    2015-07-01

    Virtual surgical planning using three-dimensional (3D) printing technology has improved surgical efficiency and precision. A limitation of this technology is that production of 3D surgical models requires a third-party source, leading to increased costs (up to $4000) and prolonged assembly times (averaging 2-3 weeks). The purpose of this study is to evaluate the feasibility, cost, and production time of customized skull models created by an "in-office" 3D printer for craniofacial reconstruction. Two patients underwent craniofacial reconstruction with the assistance of "in-office" 3D printing technology. Three-dimensional skull models were created from a bioplastic filament with a 3D printer using computed tomography (CT) image data. The cost and production time for each model were measured. For both patients, a customized 3D surgical model was used preoperatively to plan split calvarial bone grafting and intraoperatively to more efficiently and precisely perform the craniofacial reconstruction. The average cost for surgical model production with the "in-office" 3D printer was $25 (the cost of the bioplastic materials used to create the model) and the average production time was 14 hours. Virtual surgical planning using "in-office" 3D printing is feasible and allows for a more cost-effective and less time-consuming method of creating surgical models and guides. By bringing 3D printing to the office setting, we hope to improve intraoperative efficiency, surgical precision, and overall cost for various types of craniofacial and reconstructive surgery.

  13. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits them in terms of the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity, (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side, and (c) 3D city models can easily be deployed for, and used by, a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.

  14. Space Station technology testbed: 2010 deep space transport

    NASA Technical Reports Server (NTRS)

    Holt, Alan C.

    1993-01-01

    A space station in a crew-tended or permanently crewed configuration will provide major R&D opportunities for innovative technology and materials development and advanced space systems testing. A space station should be designed with the basic infrastructure elements required to grow into a major systems technology testbed. This space-based technology testbed can and should be used to support the development of technologies required to expand our utilization of near-Earth space, the Moon, and the Earth-to-Jupiter region of the Solar System. Space station support of advanced technology and materials development will result in new techniques for high-priority scientific research and the knowledge and R&D base needed for the development of major new commercial product thrusts. To illustrate the technology testbed potential of a space station and to point the way to a bold, innovative approach to advanced space systems development, a hypothetical deep space transport development and test plan is described. Key deep space transport R&D activities are described that would lead to the readiness certification of an advanced, reusable interplanetary transport capable of supporting eight or more crewmembers. With the support of a focused and highly motivated multi-agency ground R&D program, a deep space transport of this type could be assembled and tested by 2010. 
Key R&D activities on a space station would include: (1) experimental research investigating the microgravity assisted, restructuring of micro-engineered, materials (to develop and verify the in-space and in-situ 'tuning' of materials for use in debris and radiation shielding and other protective systems), (2) exposure of microengineered materials to the space environment for passive and operational performance tests (to develop in-situ maintenance and repair techniques and to support the development, enhancement, and implementation of protective systems, data and bio-processing systems, and virtual reality and telepresence/kinetic processes), (3) subsystem tests of advanced nuclear power, nuclear propulsion and communication systems (using boom extensions, remote station-keeping platforms and mobile EVA crew and robots), and (4) logistics support (crew and equipment) and command and control of deep space transport assembly, maintenance, and refueling (using a station-keeping platform).

  15. Human factors issues and approaches in the spatial layout of a space station control room, including the use of virtual reality as a design analysis tool

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P., II

    1994-01-01

    Human Factors Engineering support was provided for the 30% design review of the late Space Station Freedom Payload Control Area (PCA). The PCA was to be the payload operations control room, analogous to the Spacelab Payload Operations Control Center (POCC). This effort began with a systematic collection and refinement of the relevant requirements driving the spatial layout of the consoles and PCA. This information was used as input for specialized human factors analytical tools and techniques in the design and design analysis activities. Design concepts and configuration options were developed and reviewed using sketches, 2-D Computer-Aided Design (CAD) drawings, and immersive Virtual Reality (VR) mockups.

  16. WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, W; Rao, A; Wendt, R

    Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom’s luminal surface on CT. We tested registration accuracy by tracking the endoscope’s 6-degree-of-freedom coordinates frame-to-frame in a video recorded as it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope’s initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker’s manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5mm (24 measurements total). The mean error in the axial direction (3.1±3.3mm) was larger than in the sagittal or coronal directions (2.0±2.3mm, 1.7±1.6mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. This method will serve as a foundation for an endoscopy-CT registration framework that is clinically valuable and requires no specialized equipment.
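The frame-to-frame tracking step described here, a Nelder-Mead search over six pose parameters, can be sketched with a toy objective. The quadratic dissimilarity and the pose values below are stand-in assumptions for the real rendered-frame similarity (mutual information plus gradient alignment), which the abstract does not specify in code form:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed "true" pose of the next frame: (x, y, z, roll, pitch, yaw)
TRUE_POSE = np.array([10.0, -5.0, 120.0, 0.1, -0.2, 0.05])

def dissimilarity(pose: np.ndarray) -> float:
    """Stand-in for 1 - image_similarity(render(pose), recorded_frame)."""
    return float(np.sum((pose - TRUE_POSE) ** 2))

# Start from the previous frame's pose and search for the next frame's pose,
# mirroring the frame-to-frame tracking loop described in the abstract.
previous_pose = TRUE_POSE + np.array([1.0, -1.0, 2.0, 0.02, 0.02, -0.01])
result = minimize(dissimilarity, previous_pose, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10,
                           "maxiter": 5000, "maxfev": 5000})
print(np.allclose(result.x, TRUE_POSE, atol=1e-2))  # True
```

With a non-convex, image-based objective the same search can stall in a false minimum, as happened in 2 of the 8 videos reported above.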

  17. Journey to the centre of the cell: Virtual reality immersion into scientific data.

    PubMed

    Johnston, Angus P R; Rae, James; Ariotti, Nicholas; Bailey, Benjamin; Lilja, Andrew; Webb, Robyn; Ferguson, Charles; Maher, Sheryl; Davis, Thomas P; Webb, Richard I; McGhee, John; Parton, Robert G

    2018-02-01

    Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training

    NASA Image and Video Library

    2009-09-25

    JSC2009-E-214340 (25 Sept. 2009) --- NASA astronaut Clayton Anderson, STS-131 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.

  19. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014958 (28 Jan. 2010) --- NASA astronaut Michael Good, STS-132 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.

  20. STS-132 crew during their MSS/SIMP EVA3 OPS 4 training

    NASA Image and Video Library

    2010-01-28

    JSC2010-E-014962 (28 Jan. 2010) --- NASA astronauts Michael Good (foreground) and Garrett Reisman, both STS-132 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
