Sample records for multi-user virtual environments

  1. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R]

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  2. Pre-Service Teachers' Perspectives on Using Scenario-Based Virtual Worlds in Science Education

    ERIC Educational Resources Information Center

    Kennedy-Clark, Shannon

    2011-01-01

    This paper presents the findings of a study on the current knowledge and attitudes of pre-service teachers on the use of scenario-based multi-user virtual environments in science education. The 28 participants involved in the study were introduced to "Virtual Singapura," a multi-user virtual environment, and completed an open-ended questionnaire.…

  3. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.
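
    The design issues listed above are essentially attributes a collaborative system must represent for each user. As a rough illustration only (not code from DIVE or MASSIVE, and with invented class and field names), the following Python sketch records a subset of those issues in a plain data structure that another client could render as a body image.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Embodiment:
        """Hypothetical per-user record of the embodiment issues listed above."""
        user_id: str
        present: bool = True                    # presence in the shared space
        degree_of_presence: float = 1.0         # e.g. reduced when the user is away
        location: tuple = (0.0, 0.0, 0.0)       # position in world coordinates
        viewpoint: tuple = (0.0, 0.0, 1.0)      # direction the user is viewing from
        action_point: tuple = (0.0, 0.0, 0.0)   # where the user can currently act
        activity: str = "idle"                  # activity shown to other users
        available: bool = True                  # availability for interaction
        gestures: list = field(default_factory=list)  # recent (possibly involuntary) gestures
        history: list = field(default_factory=list)   # history of activity

    # Another client could render this record as a body image for "alice".
    alice = Embodiment(user_id="alice", activity="talking")
    alice.history.append(alice.activity)
    print(alice)
    ```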

  4. A Multi-User Virtual Environment for Building and Assessing Higher Order Inquiry Skills in Science

    ERIC Educational Resources Information Center

    Ketelhut, Diane Jass; Nelson, Brian C.; Clarke, Jody; Dede, Chris

    2010-01-01

    This study investigated novel pedagogies for helping teachers infuse inquiry into a standards-based science curriculum. Using a multi-user virtual environment (MUVE) as a pedagogical vehicle, teams of middle-school students collaboratively solved problems around disease in a virtual town called River City. The students interacted with "avatars" of…

  5. Civic Participation among Seventh-Grade Social Studies Students in Multi-User Virtual Environments

    ERIC Educational Resources Information Center

    Zieger, Laura; Farber, Matthew

    2012-01-01

    Technological advances on the Internet now enable students to develop participation skills in virtual worlds. Similar to controlling a character in a video game, multi-user virtual environments, or MUVEs, allow participants to interact with others in synchronous, online settings. The authors of this study created a link between MUVEs and…

  6. Managing Cognitive Load in Educational Multi-User Virtual Environments: Reflection on Design Practice

    ERIC Educational Resources Information Center

    Nelson, Brian C.; Erlandson, Benjamin E.

    2008-01-01

    In this paper, we explore how the application of multimedia design principles may inform the development of educational multi-user virtual environments (MUVEs). We look at design principles that have been shown to help learners manage cognitive load within multimedia environments and conduct a conjectural analysis of the extent to which such…

  7. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…

  8. 'Putting it on the table': direct-manipulative interaction and multi-user display technologies for semi-immersive environments and augmented reality applications.

    PubMed

    Encarnação, L Miguel; Bimber, Oliver

    2002-01-01

    Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches have promised to provide valuable means for the interactive data analysis involved, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: intuitive interaction with multi-dimensional data in immersive and semi-immersive environments, and stereoscopic multi-user displays that combine the advantages of Virtual and Augmented Reality technology.

  9. Global Village as Virtual Community (On Writing, Thinking, and Teacher Education).

    ERIC Educational Resources Information Center

    Polin, Linda

    1993-01-01

    Describes virtual communities known as Multi-User Simulated Environment (MUSE) or Multi-User Object Oriented environment (MOO), text-based computer "communities" whose inhabitants are a combination of the real people and constructed objects that people agree to treat as real. Describes their uses in the classroom. (SR)

  10. Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.

    ERIC Educational Resources Information Center

    Filby, A. M. Iliana

    1996-01-01

    Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…

  11. A Case Study in User Support for Managing OpenSim Based Multi User Learning Environments

    ERIC Educational Resources Information Center

    Perera, Indika; Miller, Alan; Allison, Colin

    2017-01-01

    Immersive 3D Multi User Learning Environments (MULE) have shown sufficient success to warrant their consideration as a mainstream educational paradigm. These are based on 3D Multi User Virtual Environment platforms (MUVE), and although they have been used for various innovative educational projects their complex permission systems and large…

  12. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever-increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs), provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  13. Preservice Teachers Experience Reading Response Pedagogy in a Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Dooley, Caitlin McMunn; Calandra, Brendan; Harmon, Stephen

    2014-01-01

    This qualitative case study describes how 18 preservice teachers learned to nurture literary meaning-making via activities based on Louise Rosenblatt's Reader Response Theory within a multi-user virtual environment (MUVE). Participants re-created and responded to scenes from selected works of children's literature in Second Life as a way to…

  14. Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu

    2004-05-01

    Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in fields such as entertainment, medicine and education. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
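
    The system couples an immersive omnidirectional movie display with a treadmill driven by the user's detected locomotion. The sketch below is a minimal illustration of that coupling, assuming a simple smoothed speed-matching rule; the function name, smoothing factor and speed cap are assumptions, not the authors' controller.

    ```python
    def update_belt_speed(current_belt, walking_speed, alpha=0.2, max_speed=2.0):
        """Move the treadmill belt speed toward the user's estimated walking speed (m/s).

        alpha smooths the response so the belt does not change abruptly;
        max_speed caps the belt for safety. Both constants are illustrative.
        """
        target = min(max(walking_speed, 0.0), max_speed)
        return current_belt + alpha * (target - current_belt)

    # Example: the belt ramps up gradually as the detected walking speed rises.
    belt = 0.0
    for detected in (0.0, 0.4, 0.9, 1.2, 1.2, 1.2):
        belt = update_belt_speed(belt, detected)
        print(f"belt speed: {belt:.2f} m/s")
    ```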

  15. Exploring the Integration of Technology into Jewish Education: Multi-User Virtual Environments and Supplementary School Settings

    ERIC Educational Resources Information Center

    Sohn, Johannah Eve

    2014-01-01

    This descriptive case study explores the implementation of a multi-user virtual environment (MUVE) in a Jewish supplemental school setting. The research was conducted to present the recollections and reflections of three constituent populations of a new technology exploring constructivist education in the context of supplemental and online…

  16. Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying

    2014-01-01

    Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…

  17. The Impact of Student Self-Efficacy on Scientific Inquiry Skills: An Exploratory Investigation in "River City," a Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Ketelhut, Diane Jass

    2007-01-01

    This exploratory study investigated data-gathering behaviors exhibited by 100 seventh-grade students as they participated in a scientific inquiry-based curriculum project delivered by a multi-user virtual environment (MUVE). This research examined the relationship between students' self-efficacy on entry into the authentic scientific activity and…

  18. Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play

    NASA Astrophysics Data System (ADS)

    Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven

    2007-02-01

    In this paper we describe our research using a multi-user virtual environment, Quest Atlantis, to embed fourth grade students in an aquatic habitat simulation. Specifically targeted towards engaging students in a rich inquiry investigation, we layered a socio-scientific narrative and an interactive rule set into a multi-user virtual environment gaming engine to establish a virtual world through which students learned about science inquiry, water quality concepts, and the challenges in balancing scientific and socio-economic factors. Overall, students were clearly engaged, participated in rich scientific discourse, submitted quality work, and learned science content. Further, through participation in this narrative, students developed a rich perceptual, conceptual, and ethical understanding of science. This study suggests that multi-user virtual worlds can be effectively leveraged to support academic content learning.

  19. Erratum to: Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play

    NASA Astrophysics Data System (ADS)

    Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven

    2010-08-01

    In this paper we describe our research using a multi-user virtual environment, Quest Atlantis, to embed fourth grade students in an aquatic habitat simulation. Specifically targeted towards engaging students in a rich inquiry investigation, we layered a socio-scientific narrative and an interactive rule set into a multi-user virtual environment gaming engine to establish a virtual world through which students learned about science inquiry, water quality concepts, and the challenges in balancing scientific and socio-economic factors. Overall, students were clearly engaged, participated in rich scientific discourse, submitted quality work, and learned science content. Further, through participation in this narrative, students developed a rich perceptual, conceptual, and ethical understanding of science. This study suggests that multi-user virtual worlds can be effectively leveraged to support academic content learning.

  20. SmallTool - a toolkit for realizing shared virtual environments on the Internet

    NASA Astrophysics Data System (ADS)

    Broll, Wolfgang

    1998-09-01

    With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.
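
    DWTP is described here only as an application-independent network layer for large-scale shared worlds. One design decision any such layer faces is which world updates need reliable delivery and which can be sent over a lossy channel. The sketch below illustrates that decision generically; the message types and the routing rule are assumptions and do not reproduce the actual DWTP protocol.

    ```python
    from dataclasses import dataclass

    @dataclass
    class WorldUpdate:
        """A hypothetical update message in a shared virtual world."""
        kind: str      # e.g. "transform", "create", "delete", "chat"
        payload: dict

    def needs_reliable_delivery(update: WorldUpdate) -> bool:
        """Frequent, quickly superseded updates (avatar transforms) tolerate loss;
        structural changes and chat must arrive exactly once."""
        return update.kind in {"create", "delete", "chat"}

    # Example routing decision for two updates.
    for u in (WorldUpdate("transform", {"id": 7, "pos": (1, 0, 2)}),
              WorldUpdate("create", {"id": 8, "shape": "box"})):
        channel = "reliable" if needs_reliable_delivery(u) else "unreliable"
        print(u.kind, "->", channel)
    ```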

  1. Educational MOO: Text-Based Virtual Reality for Learning in Community. ERIC Digest.

    ERIC Educational Resources Information Center

    Turbee, Lonnie

    MOO stands for "Multi-user domain, Object-Oriented." Early multi-user domains, or "MUDs," began as net-based dungeons-and-dragons type games, but MOOs have evolved from these origins to become some of cyberspace's most fascinating and engaging online communities. MOOs are social environments in a text-based virtual reality…

  2. Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play

    ERIC Educational Resources Information Center

    Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven

    2007-01-01

    In this paper we describe our research using a multi-user virtual environment, "Quest Atlantis," to embed fourth grade students in an aquatic habitat simulation. Specifically targeted towards engaging students in a rich inquiry investigation, we layered a socio-scientific narrative and an interactive rule set into a multi-user virtual environment…

  3. Second Life in Higher Education: Assessing the Potential for and the Barriers to Deploying Virtual Worlds in Learning and Teaching

    ERIC Educational Resources Information Center

    Warburton, Steven

    2009-01-01

    "Second Life" (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by…

  4. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
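
    The setup above has one user's machine act as a server that manages communication among the clients. As a hedged sketch of such a relay (the abstract does not specify the networking, so the wire format, port and field names here are assumptions), the following Python server rebroadcasts each client's newline-delimited state update, for example a head pose and a recognised gesture, to every other client.

    ```python
    import socket
    import threading

    clients = []                      # sockets of currently connected clients
    clients_lock = threading.Lock()

    def handle_client(conn):
        """Relay each newline-delimited update (assumed JSON text, e.g. a head pose
        and a recognised gesture) from one client to all other clients."""
        try:
            with conn, conn.makefile("r") as reader:
                for line in reader:
                    data = line.encode()
                    with clients_lock:
                        for other in clients:
                            if other is not conn:
                                try:
                                    other.sendall(data)
                                except OSError:
                                    pass   # receiver dropped; its own handler cleans up
        finally:
            with clients_lock:
                if conn in clients:
                    clients.remove(conn)

    def run_server(host="0.0.0.0", port=9000):
        """Accept clients and start one relay thread per connection."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with clients_lock:
                clients.append(conn)
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        run_server()
    ```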

  5. An Intelligent Crawler for a Virtual World

    ERIC Educational Resources Information Center

    Eno, Joshua

    2010-01-01

    Virtual worlds, which allow users to create and interact with content in a 3D, multi-user environment, are growing and becoming more integrated with the traditional flat web. However, little is empirically known about the content users create in virtual worlds and how it can be indexed and searched effectively. In order to gain a better understanding…

  6. Collaboration Modality, Cognitive Load, and Science Inquiry Learning in Virtual Inquiry Environments

    ERIC Educational Resources Information Center

    Erlandson, Benjamin E.; Nelson, Brian C.; Savenye, Wilhelmina C.

    2010-01-01

    Educational multi-user virtual environments (MUVEs) have been shown to be effective platforms for situated science inquiry curricula. While researchers find MUVEs to be supportive of collaborative scientific inquiry processes, the complex mix of multi-modal messages present in MUVEs can lead to cognitive overload, with learners unable to…

  7. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  8. Collaborative Virtual Gaming Worlds in Higher Education

    ERIC Educational Resources Information Center

    Whitton, Nicola; Hollins, Paul

    2008-01-01

    There is growing interest in the use of virtual gaming worlds in education, supported by the increased use of multi-user virtual environments (MUVEs) and massively multi-player online role-playing games (MMORPGs) for collaborative learning. However, this paper argues that collaborative gaming worlds have been in use much longer and are much wider…

  9. An Examination of Usability of a Virtual Environment for Students Enrolled in a College of Agriculture

    ERIC Educational Resources Information Center

    Murphrey, Theresa Pesl; Rutherford, Tracy A.; Doerfert, David L.; Edgar, Leslie D.; Edgar, Don W.

    2014-01-01

    Educational technology continues to expand with multi-user virtual environments (e.g., Second Life™) being the latest technology. Understanding a virtual environment's usability can enhance educational planning and effective use. Usability includes the interaction quality between an individual and the item being assessed. The purpose was to assess…

  10. DWTP: a basis for networked VR on the Internet

    NASA Astrophysics Data System (ADS)

    Broll, Wolfgang; Schick, Daniel

    1998-04-01

    Shared virtual worlds are one of today's major research topics. While limited to particular application areas and high speed networks in the past, they are becoming more and more available to a large number of users. One reason for this development was the introduction of VRML (the Virtual Reality Modeling Language), which has been established as a standard for the exchange of 3D worlds on the Internet. Although a number of prototype systems have been developed to realize shared multi-user worlds based on VRML, no suitable network protocol to support the demands of such environments has yet been established. In this paper we will introduce our approach to a network protocol for shared virtual environments: DWTP--the Distributed Worlds Transfer and communication Protocol. We will show how DWTP meets the demands of shared virtual environments on the Internet. We will further present SmallView, our prototype of a distributed multi-user VR system, to show how DWTP can be used to realize shared worlds.

  11. Real-Life Migrants on the MUVE: Stories of Virtual Transitions

    ERIC Educational Resources Information Center

    Perkins, Ross A.; Arreguin, Cathy

    2007-01-01

    The communication and collaborative interface known as a multi-user virtual environment (MUVE), has existed since as early as the late 1970s. MUVEs refer to programs that have an animated character ("avatar") controlled by a user within a wider environment that can be explored--or built--at will. Second Life, a MUVE created by San Francisco-based…

  12. Design and Implementation of a 3D Multi-User Virtual World for Language Learning

    ERIC Educational Resources Information Center

    Ibanez, Maria Blanca; Garcia, Jose Jesus; Galan, Sergio; Maroto, David; Morillo, Diego; Kloos, Carlos Delgado

    2011-01-01

    The best way to learn is by having a good teacher and the best language learning takes place when the learner is immersed in an environment where the language is natively spoken. 3D multi-user virtual worlds have been claimed to be useful for learning, and the field of exploiting them for education is becoming more and more active thanks to the…

  13. Brave New World

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2007-01-01

    Across the globe, progressive universities are embracing any number of MUVEs (multi-user virtual environments), 3D environments, and "immersive" virtual reality tools. And within the next few months, several universities are expected to test so-called "telepresence" videoconferencing systems from Cisco Systems and other leading…

  14. A Virtual World Workshop Environment for Learning Agile Software Development Techniques

    ERIC Educational Resources Information Center

    Parsons, David; Stockdale, Rosemary

    2012-01-01

    Multi-User Virtual Environments (MUVEs) are the subject of increasing interest for educators and trainers. This article reports on a longitudinal project that seeks to establish a virtual agile software development workshop hosted in the Open Wonderland MUVE, designed to help learners to understand the basic principles of some core agile software…

  15. Design Concerns in the Engineering of Virtual Worlds for Learning

    ERIC Educational Resources Information Center

    Rapanotti, Lucia; Hall, Jon G.

    2011-01-01

    The convergence of 3D simulation and social networking into current multi-user virtual environments has opened the door to new forms of interaction for learning in order to complement the face-to-face and Web 2.0-based systems. Yet, despite a growing user community, design knowledge for virtual worlds remains patchy, particularly when it comes to…

  16. Robots, multi-user virtual environments and healthcare: synergies for future directions.

    PubMed

    Moon, Ajung; Grajales, Francisco J; Van der Loos, H F Machiel

    2011-01-01

    The adoption of technology in healthcare over the last twenty years has steadily increased, particularly as it relates to medical robotics and Multi-User Virtual Environments (MUVEs) such as Second Life. Both disciplines have been shown to improve the quality of care and have evolved, for the most part, in isolation from each other. In this paper, we present four synergies between medical robotics and MUVEs that have the potential to decrease resource utilization and improve the quality of healthcare delivery. We conclude with some foreseeable barriers and future research directions for researchers in these fields.

  17. Multi-modal virtual environment research at Armstrong Laboratory

    NASA Technical Reports Server (NTRS)

    Eggleston, Robert G.

    1995-01-01

    One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.

  18. Claiming Unclaimed Spaces: Virtual Spaces for Learning

    ERIC Educational Resources Information Center

    Miller, Nicole C.

    2016-01-01

    The purpose of this study was to describe and examine the environments used by teacher candidates in multi-user virtual environments. Secondary data analysis of a case study methodology was employed. Multiple data sources including interviews, surveys, observations, snapshots, course artifacts, and the researcher's journal were used in the initial…

  19. The right view from the wrong location: depth perception in stereoscopic multi-user virtual environments.

    PubMed

    Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot

    2012-04-01

    Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
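
    The "ray-intersection model of stereo geometry" referred to above is a standard construction; the formulation below is a sketch in our own notation rather than the paper's. The point a displaced follower is predicted to perceive is the point closest to the two rays from the follower's eyes through the screen points that were rendered for the leader at the centre of projection.

    ```latex
    % e_L, e_R : the follower's left/right eye positions (the leader's eyes define the CoP).
    % s_L, s_R : on-screen image points of a virtual feature, rendered for the leader.
    % The predicted perceived point p* is the point closest to both viewing rays:
    \[
      (p^{*}, t_L^{*}, t_R^{*}) \;=\;
      \arg\min_{p,\,t_L,\,t_R}
      \;\bigl\| p - \bigl(e_L + t_L (s_L - e_L)\bigr) \bigr\|^{2}
      \;+\;\bigl\| p - \bigl(e_R + t_R (s_R - e_R)\bigr) \bigr\|^{2}.
    \]
    % At the CoP the two rays intersect at the intended depth; standing in front of
    % the CoP pulls p* toward the screen (compression) and standing behind pushes it
    % away (expansion), the qualitative pattern the study reports, with measured
    % distortion smaller than this geometric prediction.
    ```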

  20. On Being Bored and Lost (in Virtuality)

    ERIC Educational Resources Information Center

    Moore, Kristen; Pflugfelder, Ehren Helmut

    2010-01-01

    Education in virtual worlds has the potential, it seems, for engaging students in innovative ways and for enabling new discourses on a host of issues. Virtual locations like "Second Life," "Kaneva," or "World of Warcraft," among other multi-user virtual environments (MUVEs), also come with unique challenges for educators as they consider the…

  1. Teaching with Virtual Worlds: Factors to Consider for Instructional Use of Second Life

    ERIC Educational Resources Information Center

    Mayrath, Michael C.; Traphagan, Tomoko; Jarmon, Leslie; Trivedi, Avani; Resta, Paul

    2010-01-01

    Substantial evidence now supports pedagogical applications of virtual worlds; however, most research supporting virtual worlds for education has been conducted using researcher-developed Multi-User Virtual Environments (MUVE). Second Life (SL) is a MUVE that has been adopted by a large number of academic institutions; however, little research has…

  2. [Virtual + 1] * Reality

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi

    Virtual Reality aims at creating an artificial environment that can be perceived as a substitute to a real setting. Much effort in research and development goes into the creation of virtual environments that in their majority are perceivable only by eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user's mind.

  3. The Creation of a Theoretical Framework for Avatar Creation and Revision

    ERIC Educational Resources Information Center

    Beck, Dennis; Murphy, Cheryl

    2014-01-01

    Multi-User Virtual Environments (MUVE) are increasingly being used in education and provide environments where users can manipulate minute details of their avatar's appearance including those traditionally associated with gender and race identification. The ability to choose racial and gender characteristics differs from real-world educational…

  4. E-Drama: Facilitating Online Role-Play Using an AI Actor and Emotionally Expressive Characters

    ERIC Educational Resources Information Center

    Zhang, Li; Gillies, Marco; Dhaliwal, Kulwant; Gower, Amanda; Robertson, Dale; Crabtree, Barry

    2009-01-01

    This paper describes a multi-user role-playing environment, referred to as "e-drama", which enables groups of people to converse online, in scenario driven virtual environments. The starting point of this research, is an existing application known as "edrama", a 2D graphical environment in which users are represented by static…

  5. Investigating Student Attitudes toward a Synchronous, Online Graduate Course in a Multi-User Virtual Learning Environment

    ERIC Educational Resources Information Center

    Annetta, Leonard; Murray, Marshall; Gull Laird, Shelby; Bohr, Stephanie; Park, John

    2008-01-01

    This article describes a graduate distance education course at North Carolina State University, which combined science content and pedagogy with video game design. The course was conducted entirely in a synchronous, online, Virtual Learning Environment (VLE) through the ActiveWorlds[TM] platform. Inservice teachers enrolled as graduate students in…

  6. Designing for Real-World Scientific Inquiry in Virtual Environments

    ERIC Educational Resources Information Center

    Ketelhut, Diane Jass; Nelson, Brian C.

    2010-01-01

    Background: Most policy doctrines promote the use of scientific inquiry in the K-12 classroom, but good inquiry is hard to implement, particularly for schools with fiscal and safety constraints and for teachers struggling with understanding how to do so. Purpose: In this paper, we present the design of a multi-user virtual environment (MUVE)…

  7. Avatars Go to Class: A Virtual Environment Soil Science Activity

    ERIC Educational Resources Information Center

    Mamo, M.; Namuth-Covert, D.; Guru, A.; Nugent, G.; Phillips, L.; Sandall, L.; Kettler, T.; McCallister, D.

    2011-01-01

    Web 2.0 technology is expanding rapidly from social and gaming uses into educational applications. Specifically, multi-user virtual environments (MUVEs), such as SecondLife, allow educators to fill the gap of first-hand experience by creating simulated, realistic, evolving problems/games. In a pilot study, a team of educators at the…

  8. An Investigation into Cooperative Learning in a Virtual World Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Parson, Vanessa; Bignell, Simon

    2017-01-01

    Three-dimensional multi-user virtual environments (MUVEs) have the potential to provide experiential learning qualitatively similar to that found in the real world. MUVEs offer a pedagogically-driven immersive learning opportunity for educationalists that is cost-effective and enjoyable. A family of digital virtual avatars was created within…

  9. Emerging technologies in education and training: applications for the laboratory animal science community.

    PubMed

    Ketelhut, Diane Jass; Niemi, Steven M

    2007-01-01

    This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods to reach more students than current methods alone can serve. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.

  10. CLEW: A Cooperative Learning Environment for the Web.

    ERIC Educational Resources Information Center

    Ribeiro, Marcelo Blois; Noya, Ricardo Choren; Fuks, Hugo

    This paper outlines CLEW (collaborative learning environment for the Web). The project combines MUD (Multi-User Dimension), workflow, VRML (Virtual Reality Modeling Language) and educational concepts like constructivism in a learning environment where students actively participate in the learning process. The MUD shapes the environment structure.…

  11. Formalizing and Promoting Collaboration in 3D Virtual Environments - A Blueprint for the Creation of Group Interaction Patterns

    NASA Astrophysics Data System (ADS)

    Schmeil, Andreas; Eppler, Martin J.

    Despite the fact that virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and of application experiences, it still remains unclear how to best benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples as a result of the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.

  12. Usage of Thin-Client/Server Architecture in Computer Aided Education

    ERIC Educational Resources Information Center

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  13. A CALL for Evolving Teacher Education through 3D Microteaching

    ERIC Educational Resources Information Center

    Pappa, Giouli; Papadima-Sophocleous, Salomi

    2016-01-01

    This paper describes micro-teaching delivery in virtual worlds. Emphasis is placed on examining the effectiveness of Singularity Viewer, an Internet-based Multi-User Virtual Environment (MUVE) as the tool used for assessment of the student teacher performance. The overall goal of this endeavour lies in exploiting the opportunities derived from…

  14. Second Life in Education: The Case of Commercial Online Virtual Reality Applied to Teaching and Learning

    ERIC Educational Resources Information Center

    Brown, Abbie; Sugar, William

    2009-01-01

    Second Life is a three-dimensional, multi-user virtual environment that has attracted particular attention for its instructional potential in professional development and higher education settings. This article describes Second Life in general and explores the benefits and challenges of using it for teaching and learning.

  15. 13 Tips for Virtual World Teaching

    ERIC Educational Resources Information Center

    Villano, Matt

    2008-01-01

    Multi-user virtual environments (MUVEs) are gaining momentum as the latest and greatest learning tool in the world of education technology. How does one get started with them? How do they work? This article shares 13 secrets from immersive education experts and educators on how to have success in implementing these new tools and technologies on…

  16. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high grade of flexibility with respect to the system requirements (display and I/O devices) as well as to the ability to seamlessly and intuitively switch between different interaction modalities and interaction are sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  17. Immersive Collaboration Simulations: Multi-User Virtual Environments and Augmented Realities

    NASA Technical Reports Server (NTRS)

    Dede, Chris

    2008-01-01

    Emerging information technologies are driving shifts in the knowledge and skills society values, the development of new methods of teaching and learning, and changes in the characteristics of learning.

  18. The Role of Environment Design in an Educational Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Papachristos, Nikiforos M.; Vrellis, Ioannis; Natsis, Antonis; Mikropoulos, Tassos A.

    2014-01-01

    This paper presents empirical results from an exploratory study conducted in an authentic educational situation with preservice education students enrolled in an undergraduate course, which was partially taught in Second Life. The study investigated the effect of environment design on presence, learning outcomes and the overall experience of the…

  19. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
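
    The practical takeaway, that memory relative to the working data set matters far more than vCPU count for these workloads, can be expressed as a crude sizing rule of thumb. The sketch below only illustrates that rule; the thresholds, headroom factor and function name are assumptions, not values from the study.

    ```python
    def suggest_vm_size(dataset_gb, per_vcpu_overhead_gb=1.0, headroom=1.25,
                        min_vcpus=2, max_vcpus=4):
        """Illustrative sizing rule for a memory-bound data store in a VM:
        size memory to hold the working set with some headroom, and keep the
        vCPU count modest, since throughput was not CPU-bound in this setting.
        All constants are assumptions made for the sketch."""
        vcpus = min(max_vcpus, max(min_vcpus, min_vcpus + int(dataset_gb // 32)))
        memory_gb = dataset_gb * headroom + vcpus * per_vcpu_overhead_gb
        return {"vcpus": vcpus, "memory_gb": round(memory_gb, 1)}

    # Example: a 40 GB working set suggests prioritising RAM over extra cores.
    print(suggest_vm_size(40))   # {'vcpus': 3, 'memory_gb': 53.0}
    ```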

  20. Interaction Management Strategies on IRC and Virtual Chat Rooms.

    ERIC Educational Resources Information Center

    Altun, Arif

    Internet Relay Chat (IRC) is an electronic medium that combines orthographic form with real time, synchronous transmission in an unregulated global multi-user environment. The orthographic letters mediate the interaction in that users can only access the IRC session through reading and writing; they have no access to any visual representations at…

  1. Surgical model-view-controller simulation software framework for local and collaborative applications

    PubMed Central

    Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2010-01-01

    Purpose: Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods: A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results: The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion: A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933

  2. Surgical model-view-controller simulation software framework for local and collaborative applications.

    PubMed

    Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2011-07-01

    Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
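
    The central idea in both records above is decoupled simulation: each process (haptics, rendering, networking) runs at its own rate against shared state. The Python sketch below illustrates that pattern generically with two threads at very different update rates; it is not the authors' framework, and plain time.sleep scheduling will not actually hold a hard 1,000 Hz, it only marks the intent.

    ```python
    import threading
    import time

    class SharedState:
        """Minimal shared state written by the haptics loop and read by the renderer."""
        def __init__(self):
            self.lock = threading.Lock()
            self.tool_position = (0.0, 0.0, 0.0)
            self.haptic_steps = 0

    def haptics_loop(state, stop, rate_hz=1000):
        """High-rate loop; a real simulator would compute contact forces here."""
        period = 1.0 / rate_hz
        while not stop.is_set():
            with state.lock:
                state.haptic_steps += 1
                x, y, z = state.tool_position
                state.tool_position = (x + 1e-4, y, z)   # placeholder update
            time.sleep(period)

    def render_loop(state, stop, rate_hz=30):
        """Low-rate loop; reads the latest state and would draw the scene."""
        period = 1.0 / rate_hz
        while not stop.is_set():
            with state.lock:
                pos, steps = state.tool_position, state.haptic_steps
            print(f"render: tool x={pos[0]:.4f}, haptic steps so far={steps}")
            time.sleep(period)

    if __name__ == "__main__":
        state, stop = SharedState(), threading.Event()
        workers = [threading.Thread(target=haptics_loop, args=(state, stop)),
                   threading.Thread(target=render_loop, args=(state, stop))]
        for w in workers:
            w.start()
        time.sleep(1.0)      # let the decoupled loops run briefly
        stop.set()
        for w in workers:
            w.join()
    ```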

  3. Design on the MUVE: Synergizing Online Design Education with Multi-User Virtual Environments (MUVE)

    ERIC Educational Resources Information Center

    Sakalli, Isinsu; Chung, WonJoon

    2015-01-01

    The world is becoming increasingly virtual. Since the invention of the World Wide Web, information and human interaction has been transferring to the web at a rapid rate. Education is one of the many institutions that is taking advantage of accessing large numbers of people globally through computers. While this can be a simpler task for…

  4. A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments

    NASA Astrophysics Data System (ADS)

    Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.

    2014-02-01

    Most advanced immersive devices provide a collaborative environment within which several users have their own distinct head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor that is able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge users' input from virtual reality devices in order to control immersive multi-user applications. We approach this problem from a practical point of view, because the main requirements of the supervisor were defined according to an industrial task proposed by our automotive partner that has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers on a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and to manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution so that it can be applied to different virtual reality platforms, interactive contexts and virtual contents. A virtual context observer included in this supervisor was specifically designed to be independent of the content of the virtual scene of the targeted application, and is used to report high-level interactive and collaborative events. This context observer allows the supervisor to merge these interactive and collaborative events, but is also used to deal with new issues arising from our observation of two co-located users performing this assembly task in an immersive device. We highlight the fact that when speech recognition features are provided to the two users, it is necessary to detect automatically, according to the interactive context, whether vocal instructions must be translated into commands to be performed by the machine, or whether they are part of the natural communication necessary for collaboration. Information from the context observer indicating that a user is looking at their collaborator is important for detecting whether the user is talking to their partner. Moreover, as the users are physically co-located and head tracking is used to provide high-fidelity stereoscopic rendering and natural walking navigation in the virtual scene, we have to deal with collisions and screen occlusion between the co-located users in the physical workspace. The working area and focus of each user, computed and reported by the context observer, are necessary to prevent or avoid these situations.
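
    One concrete problem described above is deciding, from the interactive context, whether a recognised utterance is a command for the machine or talk addressed to the co-located partner. The sketch below illustrates that routing decision with a hypothetical context observer that reports gaze; the rule, the command vocabulary and all names are assumptions, not the supervisor's actual algorithm.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ContextObserver:
        """Hypothetical high-level interaction context reported for one user."""
        looking_at_partner: bool
        holding_object: bool

    def route_speech(utterance, ctx, commands=("attach", "release", "undo")):
        """Decide whether recognised speech is a machine command or partner-directed talk.

        Rule of thumb used here (an assumption, not the paper's algorithm): treat
        speech as a command only if it matches a known command word and the
        speaker is not currently addressing the partner.
        """
        word = utterance.strip().lower()
        if word in commands and not ctx.looking_at_partner:
            return ("command", word)
        return ("chat", utterance)

    # The same word is routed differently depending on gaze context.
    print(route_speech("attach", ContextObserver(looking_at_partner=False, holding_object=True)))
    print(route_speech("attach", ContextObserver(looking_at_partner=True, holding_object=True)))
    ```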

  5. Combined virtual and real robotic test-bed for single operator control of multiple robots

    NASA Astrophysics Data System (ADS)

    Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash

    2010-04-01

    Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which links every real robot to its virtual counterpart. A novel virtual interface, integrated with Augmented Reality, can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment and improve user situational awareness. An operator can efficiently control the real multi-robot team using the Drag-to-Move method on the virtual robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, image guidance and tracking can reduce operator workload.
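
    Guarded teleoperation, as mentioned, keeps operators from accidentally driving robots into walls and other objects. A minimal form of such a guard is sketched below: the commanded speed is attenuated as the nearest obstacle gets closer. This is a generic illustration with assumed distance thresholds, not the interface described in the paper.

    ```python
    def guard_velocity(cmd_speed, nearest_obstacle_m, stop_dist=0.3, slow_dist=1.5):
        """Scale the operator's commanded forward speed by obstacle proximity.

        Inside stop_dist the robot refuses to advance; between stop_dist and
        slow_dist the allowed speed ramps up linearly. Distances are illustrative.
        """
        if nearest_obstacle_m <= stop_dist:
            return 0.0
        if nearest_obstacle_m >= slow_dist:
            return cmd_speed
        scale = (nearest_obstacle_m - stop_dist) / (slow_dist - stop_dist)
        return cmd_speed * scale

    # The same joystick command is attenuated as the robot approaches a wall.
    for d in (2.0, 1.0, 0.4, 0.2):
        print(f"obstacle at {d} m -> speed {guard_velocity(0.8, d):.2f} m/s")
    ```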

  6. Ergonomic approaches to designing educational materials for immersive multi-projection system

    NASA Astrophysics Data System (ADS)

    Shibata, Takashi; Lee, JaeLin; Inoue, Tetsuri

    2014-02-01

    Rapid advances in computer and display technologies have made it possible to present high-quality virtual reality (VR) environments. To use such virtual environments effectively, research should be performed into how users perceive and react to virtual environments in view of particular human factors. We created a VR simulation of sea fish for science education, and we conducted an experiment to examine how observers perceive the size and depth of an object within their reach and evaluated their visual fatigue. We chose a multi-projection system for presenting the educational VR simulation, because this system can provide actual-size objects and produce stereo images located close to the observer. The results of the experiment show that estimation of size and depth was relatively accurate when subjects used physical actions to assess them. Presenting images within the observer's reach is suggested to be useful for education in VR environments. The evaluation of visual fatigue shows that the level of symptoms from viewing stereo images with a large disparity in the VR environment was low over a short viewing time.

  7. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become increasingly important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation without needing to install any software other than a web browser on the user's own computer. Simulation Platform is therefore expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Studying the Effectiveness of Multi-User Immersive Environments for Collaborative Evaluation Tasks

    ERIC Educational Resources Information Center

    Lorenzo, Carlos-Miguel; Sicilia, Miguel Angel; Sanchez, Salvador

    2012-01-01

    Massively Multiuser On-line Learning (MMOL) Platforms, often called "virtual learning worlds", constitute a still unexplored context for communication-enhanced learning, where synchronous communication skills in an explicit social setting enhance the potential of effective collaboration. In this paper, we report on an experimental study of…

  10. Three Community Building Strategies and Their Impacts in an On-Line Course.

    ERIC Educational Resources Information Center

    Egbert, Joy; Chao, Chin-Chi; Ngeow, Karen

    This paper describes three instructional strategies designed to support community building in an online graduate teacher education course: (1) MOO (Multi-User Dimensions Object Oriented) field trips, in which participants are introduced to text-based virtual environments on the Internet through metaphoric online "field trips"; (2)…

  11. MOOs for Teaching and Learning.

    ERIC Educational Resources Information Center

    Furst-Bowe, Julie

    1996-01-01

    Discusses the use of MOOs (Multi-User Dimension/Dungeon Object Oriented), text-based virtual reality environments, in education. Highlights include connecting to a network; exploring several MOOs to determine which is most appropriate; and familiarizing students with the MOO's interaction and behavior policies, as well as how to operate in the…

  12. Developing Simulations in Multi-User Virtual Environments to Enhance Healthcare Education

    ERIC Educational Resources Information Center

    Rogers, Luke

    2011-01-01

    Computer-based clinical simulations are a powerful teaching and learning tool because of their ability to expand healthcare students' clinical experience by providing practice-based learning. Despite the benefits of traditional computer-based clinical simulations, there are significant issues that arise when incorporating them into a flexible,…

  13. Educational Game as Supplemental Learning Tool: Benefits, Challenges, and Tensions Arising from Use in an Elementary School Classroom

    ERIC Educational Resources Information Center

    Warren, Scott; Dondlinger, Mary Jo; Stein, Richard; Barab, Sasha

    2009-01-01

    This article examines the qualitative findings from a mixed-methods comparison study of the use of an online multi-user virtual environment called Anytown which supplemented face-to-face writing instruction in a fourth grade classroom to determine implications for the design of such environments and the reported impact of this design on students…

  14. The Selimiye Mosque of Edirne, Turkey - AN Immersive and Interactive Virtual Reality Experience Using Htc Vive

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.

    2017-05-01

    Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity is presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use (including simultaneous multi-user environments) of such a VR visualisation for a CH monument is discussed in this contribution.

  15. Temporal Issues in the Design of Virtual Learning Environments.

    ERIC Educational Resources Information Center

    Bergeron, Bryan; Obeid, Jihad

    1995-01-01

    Describes design methods used to influence user perception of time in virtual learning environments. Examines the use of temporal cues in medical education and clinical competence testing. Finds that user perceptions of time affects user acceptance, ease of use, and the level of realism of a virtual learning environment. Contains 51 references.…

  16. Virtual button interface

    DOEpatents

    Jones, Jake S.

    1999-01-01

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.
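
    A minimal sketch of the interaction sequence the patent record describes: gaze dwell on a virtual button produces a perceptible change, and the command is issued once a confirming action such as a thumb-switch press occurs. The class name, dwell time, and command strings below are illustrative assumptions, not the patented implementation.

```python
# Sketch of a gaze-activated virtual button: dwelling gaze highlights the
# button (the "perceptible change"), and an optional confirming action
# (e.g. a thumb switch) issues the command. All names are illustrative.

class VirtualButton:
    def __init__(self, name, command, dwell_s=0.5, require_confirm=True):
        self.name = name
        self.command = command
        self.dwell_s = dwell_s
        self.require_confirm = require_confirm
        self.highlighted = False
        self._gaze_time = 0.0

    def update(self, gazed_at, confirm_pressed, dt):
        """Advance the button state by dt seconds; return a command or None."""
        if not gazed_at:
            self._gaze_time = 0.0
            self.highlighted = False
            return None
        self._gaze_time += dt
        if self._gaze_time >= self.dwell_s:
            self.highlighted = True          # perceptible change to the user
            if not self.require_confirm or confirm_pressed:
                return self.command
        return None


if __name__ == "__main__":
    button = VirtualButton("open_hatch", command="OPEN_HATCH")
    # Simulate 0.7 s of steady gaze, then a thumb-switch press.
    for _ in range(7):
        button.update(gazed_at=True, confirm_pressed=False, dt=0.1)
    print(button.update(gazed_at=True, confirm_pressed=True, dt=0.1))  # OPEN_HATCH
```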

  17. ARTEMIS: Reinvigorating History and Theory in Art and Design Education

    ERIC Educational Resources Information Center

    Janet, Jeff; Miles, Melissa

    2009-01-01

    ARTEMIS (Art Educational Multiplayer Interactive Space) is an online multi-user virtual environment that is designed around the objects, artefacts, philosophies, personalities and critical discourses of the histories and theories of art and design. Conceived as a means of reinvigorating art history and theory education in the digital age, ARTEMIS…

  18. MOOving around the Net: The Educational Potential of MOOs.

    ERIC Educational Resources Information Center

    1996

    The use of MOOs, multi-user simulated environments, in education is examined in three papers: "MOOving around the Net: The Educational Potential of MOOs: A Point of View" (Daniel Ingvarson); "MOOing in a Foreign Language: How, Why and Who?" (Lonnie Turbee); and "MOOving around the Net: Real to Virtual and Back…

  19. Programming for Fun: MUDs as a Context for Collaborative Learning.

    ERIC Educational Resources Information Center

    Bruckman, Amy

    Multi-User Dungeons (MUDs) are text-based virtual reality environments in which participants separated by great physical distances can communicate and collaborate in programming. Most MUDs started out as adventure games but are quickly being adapted for more "serious" endeavors. This paper presents a case study of the experiences of a…

  20. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, held at University College London, UK, 15-17 September 2003.

  1. vTrain: a novel curriculum for patient surge training in a multi-user virtual environment (MUVE).

    PubMed

    Greci, Laura S; Ramloll, Rameshsharma; Hurst, Samantha; Garman, Karen; Beedasy, Jaishree; Pieper, Eric B; Huang, Ricky; Higginbotham, Erin; Agha, Zia

    2013-06-01

    During a pandemic influenza, emergency departments will be overwhelmed with a large influx of patients seeking care. Although all hospitals should have a written plan for dealing with this surge of health care utilization, most hospitals struggle with ways to educate staff and practice for potentially catastrophic events. Hypothesis/Problem: To better prepare hospital staff for a patient surge, a novel educational curriculum was developed utilizing an emergency department for a patient surge functional drill. A multidisciplinary team of medical educators, evaluators, emergency preparedness experts, and technology specialists developed a curriculum to: (1) train novice users to function in their job class in a multi-user virtual environment (MUVE); (2) obtain appropriate pre-drill disaster preparedness training; (3) perform functional team exercises in a MUVE; and (4) reflect on their performance after the drill. A total of 14 students participated in one of two iterations of the pilot training program; seven nurses completed the emergency department triage course, and seven hospital administrators completed the Command Post (CP) course. All participants reported positive experiences in written course evaluations and structured verbal debriefings, and self-reported an increase in disaster preparedness knowledge. Students also reported improved team communication, planning, and team decision making, and the ability to visualize and reflect on their performance. Data from this pilot program suggest that the immersive, virtual teaching method is well suited to team-based, reflective practice and learning of disaster management skills.

  2. Shared virtual environments for aerospace training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen; Voss, Mark

    1994-01-01

    Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
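
    The state-sharing scheme described above can be sketched at a toy level: each site renders its own copy of the scene, and every change of object position or state is broadcast so that all copies stay current. The hub and scene classes below are hypothetical stand-ins for the actual networked implementation (and for the DIS protocols mentioned in the abstract).

```python
# Sketch of shared-virtual-environment state synchronisation: one user's
# change of an object's position is relayed to every connected site so each
# local copy of the environment stays current. Classes are illustrative only.

class SharedScene:
    def __init__(self):
        self.objects = {}          # object id -> (x, y, z)

    def apply(self, update):
        obj_id, pos = update
        self.objects[obj_id] = pos


class SessionHub:
    """Relays every update to all connected sites (a star-topology sketch)."""
    def __init__(self):
        self.sites = []

    def join(self, scene):
        self.sites.append(scene)

    def broadcast(self, update):
        for scene in self.sites:
            scene.apply(update)


if __name__ == "__main__":
    hub = SessionHub()
    houston, moscow = SharedScene(), SharedScene()
    hub.join(houston)
    hub.join(moscow)
    hub.broadcast(("tool_caddy", (1.0, 0.5, 2.0)))   # one user moves an object
    print(moscow.objects["tool_caddy"])               # every site sees the move
```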

  3. Virtual button interface

    DOEpatents

    Jones, J.S.

    1999-01-12

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.

  4. Get immersed in the Soil Sciences: the first community of avatars in the EGU Assembly 2015!

    NASA Astrophysics Data System (ADS)

    Castillo, Sebastian; Alarcón, Purificación; Beato, Mamen; Emilio Guerrero, José; José Martínez, Juan; Pérez, Cristina; Ortiz, Leovigilda; Taguas, Encarnación V.

    2015-04-01

    Virtual reality and immersive worlds refer to artificial computer-generated environments with which users act and interact, as in a familiar environment, through figurative virtual individuals (avatars). Virtual environments will be the technology of the early twenty-first century that most dramatically changes the way we live, particularly in the areas of training and education, product development and entertainment (Schmorrow, 2009). The usefulness of immersive worlds has been proved in different fields. They reduce geographic and social barriers between different stakeholders and create virtual social spaces that can positively impact learning and discussion outcomes (Lorenzo et al. 2012). In this work we present a series of interactive meetings in a virtual building to celebrate the International Year of Soils and to promote the importance of soil functions and soil conservation. In a virtual room, the avatars of senior researchers will meet young-scientist avatars to talk about: 1) what remains to be done in Soil Sciences; 2) the main current limitations and difficulties; and 3) the future hot research lines. Interactive participation does not require physically attending the EGU Assembly 2015. In addition, this virtual building inspired by Soil Sciences can be completed with teaching resources from different locations around the world, and it will be used to improve the learning of Soil Sciences in a multicultural context. REFERENCES: Lorenzo C.M., Sicilia, M.A., Sánchez S. 2012. Studying the effectiveness of multi-user immersive environments for collaborative evaluation tasks. Computers & Education 59 (2012) 1361-1376. Schmorrow D.D. 2009. "Why virtual?" Theoretical Issues in Ergonomics Science 10(3): 279-282.

  5. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.

    PubMed

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.
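
    A toy numeric illustration of the paper's premise that community sentiment can anticipate virtual-currency moves: fit a single-feature linear model of next-day change on daily sentiment. The data and the model choice below are stand-ins; the authors' actual predictor and MMORPG datasets are not reproduced.

```python
# Toy illustration: relate daily forum sentiment to next-day virtual-currency
# change with a simple one-feature linear fit. Data and model are stand-ins.
import numpy as np

# Hypothetical daily mean sentiment (-1..1) and next-day % change in the
# in-game currency's exchange rate.
sentiment   = np.array([-0.6, -0.2, 0.0, 0.3, 0.5, 0.8])
next_change = np.array([-2.1, -0.7, 0.1, 0.9, 1.6, 2.8])

slope, intercept = np.polyfit(sentiment, next_change, 1)

def predict_change(todays_sentiment):
    """Predict tomorrow's % change from today's mean sentiment score."""
    return slope * todays_sentiment + intercept

print(f"fit: change ~= {slope:.2f} * sentiment + {intercept:.2f}")
print(f"prediction for sentiment 0.4: {predict_change(0.4):.2f}%")
```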

  6. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis

    PubMed Central

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496

  7. A Proposed Theory Seeded Methodology for Design Based Research into Effective Use of MUVES in Vocational Education Contexts

    ERIC Educational Resources Information Center

    Cochrane, Todd; Davis, Niki; Morrow, Donna

    2013-01-01

    A methodology for design-based research (DBR) into effective development and use of Multi-User Virtual Environments (MUVE) in vocational education is proposed. It blends software development with DBR, drawing on two theories selected to inform the methodology. Legitimate peripheral participation (LPP; Lave & Wenger, 1991) provides a filter when…

  8. Shifts in Student Motivation during Usage of a Multi-User Virtual Environment for Ecosystem Science

    ERIC Educational Resources Information Center

    Metcalf, Shari; Chen, Jason; Kamarainen, Amy; Frumin, Kim; Vickrey, Trisha; Grotzer, Tina; Dede, Chris

    2014-01-01

    In incorporating technology in science education, some have expressed concern that the value added by technology is primarily due to the novelty or excitement about using the devices, resulting in no lasting effect on student motivation or learning in science. This research addresses this concern through evaluation of student motivation during a…

  9. Adoption of Second Life in Higher Education: Comparing the Effect of Utilitarian and Hedonic Behaviours

    ERIC Educational Resources Information Center

    Saeed, Nauman; Sinnappan, Sukunesan

    2013-01-01

    Second Life is a three dimensional multi-user virtual environment within the Web 2.0 suite of applications which has gained wide spread popularity amongst educators in the recent years. However, limited empirical research has been reported on the adoption of Second Life, especially within higher education. The majority of technology adoption…

  10. Exploring the Use of Three-Dimensional Multi-User Virtual Environments for Online Problem-Based Learning

    ERIC Educational Resources Information Center

    Omale, Nicholas M.

    2010-01-01

    This exploratory case study examines how three media attributes in 3-D MUVEs--avatars, 3-D spaces and bubble dialogue boxes--affect interaction in an online problem-based learning (PBL) activity. The study participants were eleven undergraduate students enrolled in a 200-level, three-credit-hour technology integration course at a Midwestern…

  11. Using "Second Life" in School Librarianship

    ERIC Educational Resources Information Center

    Perez, Lisa

    2009-01-01

    In this article, the author discusses using Second Life (SL) in school librarianship. SL is a multi-user virtual environment in which persons create avatars to allow them to move and interact with other avatars. They can build and manipulate objects. To move, they can walk, run, fly, or teleport. There are many areas within SL to allow people to…

  12. Learning Outcome, Presence and Satisfaction from a Science Activity in Second Life

    ERIC Educational Resources Information Center

    Vrellis, Ioannis; Avouris, Nikolaos; Mikropoulos, Tassos A.

    2016-01-01

    Although problem-based learning (PBL) has many advantages, it often fails to connect to the real world outside the classroom. The integration with the laboratory setting and the use of information and communication technologies (ICTs) have been proposed to address this deficiency. Multi-user virtual environments (MUVEs) like Second Life (SL) are…

  13. Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning

    ERIC Educational Resources Information Center

    Dunleavy, Matt; Dede, Chris; Mitchell, Rebecca

    2009-01-01

    The purpose of this study was to document how teachers and students describe and comprehend the ways in which participating in an augmented reality (AR) simulation aids or hinders teaching and learning. Like the multi-user virtual environment (MUVE) interface that underlies Internet games, AR is a good medium for immersive collaborative…

  14. Measuring sense of presence and user characteristics to predict effective training in an online simulated virtual environment.

    PubMed

    De Leo, Gianluca; Diggs, Leigh A; Radici, Elena; Mastaglio, Thomas W

    2014-02-01

    Virtual-reality solutions have successfully been used to train distributed teams. This study aimed to investigate the correlation between user characteristics and sense of presence in an online virtual-reality environment where distributed teams are trained. A greater sense of presence has the potential to make training in the virtual environment more effective, leading to the formation of teams that perform better in a real environment. Being able to identify, before starting online training, those user characteristics that predict a greater sense of presence can lead to the selection of trainees who would benefit most from the online simulated training. This is an observational study with a retrospective postsurvey of participants' user characteristics and degree of sense of presence. Twenty-nine members from 3 Air Force National Guard Medical Service expeditionary medical support teams participated in an online virtual environment training exercise and completed the Independent Television Commission-Sense of Presence Inventory survey, which measures sense of presence and user characteristics. Nonparametric statistics were applied to determine the statistical significance of the relationships between user characteristics and sense of presence. Comparing user characteristics to the 4 scales of the Independent Television Commission-Sense of Presence Inventory using the Kendall τ test gave the following results: the user characteristics "how often you play video games" (τ(26)=-0.458, P<0.01) and "television/film production knowledge" (τ(27)=-0.516, P<0.01) were significantly related to negative effects. Negative effects refer to adverse physiologic reactions to the virtual environment experience, such as dizziness, nausea, headache, and eyestrain. The user characteristic "knowledge of virtual reality" was significantly related to engagement (τ(26)=0.463, P<0.01) and negative effects (τ(26)=-0.404, P<0.05). Individuals who have knowledge about virtual environments and experience with gaming environments report a higher sense of presence, which indicates that they will likely benefit more from online virtual training. Future research could include a larger population of expeditionary medical support teams, and the results obtained could be used to create a model that predicts the level of presence based on user characteristics. To maximize results and minimize costs, only those individuals who, based on their characteristics, are expected to have a higher sense of presence and fewer negative effects could be selected for online simulated virtual environment training.
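
    The correlation analysis reported above can be reproduced in form (not in data) with scipy.stats.kendalltau; the survey values below are invented purely to show the computation.

```python
# Sketch of the analysis reported above: a Kendall tau correlation between a
# user characteristic (e.g. gaming frequency) and an ITC-SOPI subscale score.
# The numbers below are invented for illustration; only the method matches.
from scipy.stats import kendalltau

gaming_frequency = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7]   # ordinal survey responses
negative_effects = [4, 4, 3, 3, 3, 2, 2, 2, 1, 1]   # ITC-SOPI subscale scores

tau, p_value = kendalltau(gaming_frequency, negative_effects)
print(f"Kendall tau = {tau:.3f}, p = {p_value:.3f}")
```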

  15. Optimal allocation of physical water resources integrated with virtual water trade in water scarce regions: A case study for Beijing, China.

    PubMed

    Ye, Quanliang; Li, Yi; Zhuo, La; Zhang, Wenlong; Xiong, Wei; Wang, Chao; Wang, Peifang

    2018-02-01

    This study provides an innovative application of virtual water trade in the traditional allocation of physical water resources in water-scarce regions. A multi-objective optimization model was developed to optimize the allocation of physical water and virtual water resources to different water users in Beijing, China, considering the trade-offs between the economic benefits and environmental impacts of water consumption. Surface water, groundwater, transferred water and reclaimed water constituted the water supply side, while virtual water flows were associated with the trade of five major crops (barley, corn, rice, soy and wheat) and three livestock products (beef, pork and poultry) in the agricultural sector (calculated from the traded quantities of products and their virtual water contents). Urban use (daily activities and public facilities), industry, the environment and agriculture (crop growing) were considered on the water demand side. For the traditional allocation of physical water resources, the results showed that agriculture and urban use were the two predominant water users (accounting for 54% and 28%, respectively), while groundwater and surface water satisfied around 70% of the demands of the different users (accounting for 36% and 34%, respectively). When the virtual water trade of the eight agricultural products was considered in the allocation procedure, the proportion of agricultural consumption decreased to 45% of total water demand, while groundwater consumption decreased to 24% of total water supply. Virtual water trade overturned the traditional composition of water supplied from different sources for agricultural consumption and became the largest water source in Beijing. Additionally, environmental demand took a similar percentage of water consumption from each water source. Reclaimed water was the main water source for industrial and environmental users. The results suggest that physical water resources would mainly satisfy urban and environmental consumption, and that the gap between water supply and demand could be filled by virtual water imports in water-scarce regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
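
    A drastically simplified, single-objective sketch of the allocation problem described above: a weighted sum of economic benefit and environmental impact is optimized with scipy.optimize.linprog. All coefficients, supply caps, and demands are invented, and the paper's actual multi-objective model and Beijing data are not reproduced.

```python
# Greatly simplified sketch of the allocation idea: choose how much surface and
# reclaimed water goes to urban and agricultural users so that a weighted sum
# of economic benefit minus environmental impact is maximised.
from scipy.optimize import linprog

# Decision variables x = [surf->urban, surf->agri, recl->urban, recl->agri]
benefit  = [5.0, 2.0, 5.0, 2.0]    # economic benefit per million m^3 (invented)
env_cost = [1.0, 1.0, 0.2, 0.2]    # environmental impact per million m^3 (invented)
w = 1.0                            # weight trading benefit against impact
c = [-(b - w * e) for b, e in zip(benefit, env_cost)]   # linprog minimises

A_ub = [
    [1, 1, 0, 0],    # surface-water supply cap
    [0, 0, 1, 1],    # reclaimed-water supply cap
    [-1, 0, -1, 0],  # urban demand (>= 8, written as <= -8)
    [0, -1, 0, -1],  # agricultural demand (>= 5)
]
b_ub = [10, 6, -8, -5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("allocation (million m^3):", [round(v, 2) for v in res.x])
```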

  16. A virtual therapeutic environment with user projective agents.

    PubMed

    Ookita, S Y; Tokuda, H

    2001-02-01

    Today, we see the Internet as more than just an information infrastructure: it is also a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life because of its anonymous environment. Virtual-world interactions are bringing about new psychological illnesses ranging from net addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment, and there are very few therapeutic environments or tools made especially for virtual therapy. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, a therapeutic environment that allows the user to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the user's Projective Agent interacts and collaborates with others to support recovery and psychological growth. The prototype system provides a simulation environment in which psychologists can adjust parameters and customize their own simulation environment. The model and tools are a first attempt toward simulating online personalities that may exist only online and providing data for observation.

  17. Exploring the Educational Potential of Three-Dimensional Multi-User Virtual Worlds for STEM Education: A Mixed-Method Systematic Literature Review

    ERIC Educational Resources Information Center

    Pellas, Nikolaos; Kazanidis, Ioannis; Konstantinou, Nikolaos; Georgiou, Georgia

    2017-01-01

    The present literature review builds on the results of 50 research articles published from 2000 until 2016. All these studies have successfully accomplished various learning tasks in the domain of Science, Technology, Engineering, and Mathematics (STEM) education using three-dimensional (3-D) multi-user virtual worlds for Primary, Secondary and…

  18. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

    PubMed Central

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs in virtual environments draws on skills similar to those used in the real world, enabling both training on the device and virtual training on environments before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, can use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: Task 1) and surroundings (Experiment 1: Task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience. PMID:26882473
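
    The general visual-to-audio substitution principle underlying devices like the EyeMusic (sweep the image over time, encode pixel row as pitch and brightness as loudness) can be sketched as follows. This is an illustration of the idea only, not the EyeMusic's actual musical-scale encoding; the image, thresholds, and frequency range are assumptions.

```python
# Sketch of a generic visual-to-audio sensory-substitution mapping: sweep a
# small image left-to-right and turn each bright pixel into a tone whose pitch
# encodes its row and whose amplitude encodes its brightness.
import numpy as np

def sonify(image, f_low=220.0, f_high=880.0):
    """Yield (time_step, frequency, amplitude) triples for bright pixels."""
    rows, cols = image.shape
    for col in range(cols):                     # left-to-right sweep = time
        for row in range(rows):
            level = image[row, col]
            if level > 0.1:                     # ignore near-black pixels
                # top row -> high pitch, bottom row -> low pitch
                frac = 1.0 - row / max(rows - 1, 1)
                freq = f_low + frac * (f_high - f_low)
                yield col, freq, float(level)

if __name__ == "__main__":
    # A tiny 4x4 "scene" with a bright diagonal edge.
    scene = np.eye(4)
    for step, freq, amp in sonify(scene):
        print(f"t={step} f={freq:.0f} Hz amp={amp:.1f}")
```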

  19. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution.

    PubMed

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs in virtual environments draws on skills similar to those used in the real world, enabling both training on the device and virtual training on environments before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, can use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: Task 1) and surroundings (Experiment 1: Task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience.

  20. Multi-degree of freedom joystick for virtual reality simulation.

    PubMed

    Head, M J; Nelson, C A; Siu, K C

    2013-11-01

    A modular control interface and simulated virtual reality environment were designed and created in order to determine how the kinematic architecture of a control interface affects minimally invasive surgery training. A user is able to selectively determine the kinematic configuration of an input device (number, type and location of degrees of freedom) for a specific surgical simulation through the use of modular joints and constraint components. Furthermore, passive locking was designed and implemented through the use of inflated latex tubing around rotational joints in order to allow a user to step away from a simulation without unwanted tool motion. It is believed that these features will facilitate improved simulation of a variety of surgical procedures and, thus, improve surgical skills training.

  1. Exploring Design Requirements for Repurposing Dental Virtual Patients From the Web to Second Life: A Focus Group Study

    PubMed Central

    Antoniou, Panagiotis E; Athanasopoulou, Christina A; Dafli, Eleni

    2014-01-01

    Background: Since their inception, virtual patients have provided health care educators with a way to engage learners in an experience simulating the clinician’s environment without danger to learners and patients. This has led this learning modality to be accepted as an essential component of medical education. With the advent of the visually and audio-rich 3-dimensional multi-user virtual environment (MUVE), a new deployment platform has emerged for educational content. Immersive, highly interactive, multimedia-rich MUVEs that seamlessly foster collaboration provide a new hotbed for the deployment of medical education content. Objective: This work aims to assess the suitability of the Second Life MUVE as a virtual patient deployment platform for undergraduate dental education, and to explore the requirements and specifications needed to meaningfully repurpose Web-based virtual patients in MUVEs. Methods: Through the scripting capabilities and available art assets in Second Life, we repurposed an existing Web-based periodontology virtual patient into Second Life. Through a series of point-and-click interactions and multiple-choice queries, the user experienced a specific periodontology case and was asked to provide the optimal responses for each of the challenges of the case. A focus group of 9 undergraduate dentistry students experienced both the Web-based and the Second Life version of this virtual patient. The group convened 3 times, discussed relevant issues such as the group’s computer literacy and the assessment of Second Life as a virtual patient deployment platform, and compared the Web-based and MUVE-deployed virtual patients. Results: A comparison between the Web-based and the Second Life virtual patient revealed the inherent advantages of the more experiential and immersive Second Life virtual environment. However, several challenges for the successful repurposing of virtual patients from the Web to the MUVE were identified. The challenges identified by the focus group for repurposing Web virtual patients to the MUVE platform were (1) increased case complexity to match the user’s gaming preconceptions in a MUVE, (2) the necessity to decrease textual narration and provide the pertinent information in a more immersive sensory way, and (3) the requirement to allow the user to actuate the solutions of problems instead of describing them through narration. Conclusions: For a successful systematic repurposing effort of virtual patients to MUVEs such as Second Life, the best practices of experiential and immersive game design should be organically incorporated in the repurposing workflow (automated or not). These findings are pivotal in an era in which open educational content is transferred to and shared among users, learners, and educators of various open repositories/environments. PMID:24927470

  2. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
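
    A toy sketch of the pin-height computation implied by the description above: when the virtual hand lies over an object, each pin's height is set by sampling the object's surface beneath it. The heightfield, grid size, and function names are assumptions for illustration, not the paper's control system.

```python
# Sketch of the pin-array idea: sample a virtual surface heightfield under a
# small grid of pins placed at the "virtual hand" position. Illustrative only.
import numpy as np

def pin_heights(heightfield, hand_x, hand_y, pins=4, spacing=1):
    """Sample a 2D heightfield under a pins x pins array placed at (hand_x, hand_y)."""
    rows, cols = heightfield.shape
    heights = np.zeros((pins, pins))
    for i in range(pins):
        for j in range(pins):
            r = min(rows - 1, max(0, hand_y + i * spacing))
            c = min(cols - 1, max(0, hand_x + j * spacing))
            heights[i, j] = heightfield[r, c]
    return heights

if __name__ == "__main__":
    # A simple virtual "bump" the hand can feel.
    x = np.linspace(-1, 1, 16)
    bump = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) * 4)
    print(np.round(pin_heights(bump, hand_x=6, hand_y=6), 2))
```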

  3. Exploring Ecosystems from the Inside: How Immersive Multi-User Virtual Environments Can Support Development of Epistemologically Grounded Modeling Practices in Ecosystem Science Instruction

    ERIC Educational Resources Information Center

    Kamarainen, Amy M.; Metcalf, Shari; Grotzer, Tina; Dede, Chris

    2015-01-01

    Recent reform efforts and the next generation science standards emphasize the importance of incorporating authentic scientific practices into science instruction. Modeling can be a particularly challenging practice to address because modeling occurs within a socially structured system of representation that is specific to a domain. Further, in the…

  4. A Look inside a MUVE Design Process: Blending Instructional Design and Game Principles to Target Writing Skills

    ERIC Educational Resources Information Center

    Warren, Scott J.; Stein, Richard A.; Dondlinger, Mary Jo; Barab, Sasha A.

    2009-01-01

    The number of games, simulations, and multi-user virtual environments designed to promote learning, engagement with subject matter, or intended to contextualize learning has been steadily increasing over the past decade. While the use of these digital designs in educational settings has begun to show promise for improving learning, motivation, and…

  5. Teachers and Game-Based Learning: Improving Understanding of How to Increase Efficacy of Adoption

    ERIC Educational Resources Information Center

    Ketelhut, Diane Jass; Schifter, Catherine C.

    2011-01-01

    Interest in game-based learning for K-12 is growing. Thus, helping teachers understand how to use these new pedagogies is important. This paper presents a cross-case study of the development of teacher professional development for the River City project, a games-based multi-user virtual environment science curriculum project for middle school…

  6. Multi-User Virtual Environments Fostering Collaboration in Formal Education

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo

    2014-01-01

    This paper is about how serious games based on MUVEs in formal education can foster collaboration. More specifically, it is about a large case-study with four different programs which took place from 2002 to 2009 and involved more than 9,000 students, aged between 12 and 18, from various nations (18 European countries, Israel and the USA). These…

  7. Exploring the Use of Individualized, Reflective Guidance In an Educational Multi-User Virtual Environment

    NASA Astrophysics Data System (ADS)

    Nelson, Brian C.

    2007-02-01

    This study examines the patterns of use and potential impact of individualized, reflective guidance in an educational Multi-User Virtual Environment (MUVE). A guidance system embedded within a MUVE-based scientific inquiry curriculum was implemented with a sample of middle school students in an exploratory study investigating (a) whether access to the guidance system was associated with improved learning, (b) whether students viewing more guidance messages saw greater improvement on content tests than those viewing less, and (c) whether there were any differences in guidance use among boys and girls. Initial experimental findings showed that basic access to individualized guidance used with a MUVE had no measurable impact on learning. However, post-hoc exploratory analyses indicated that increased use of the system among those with access to it was positively associated with content test score gains. In addition, differences were found in overall learning outcomes by gender and in patterns of guidance use by boys and girls, with girls outperforming boys across a spectrum of guidance system use. Based on these exploratory findings, the paper suggests design guidelines for the development of guidance systems embedded in MUVEs and outlines directions for further research.

  8. A Study of Multi-Representation of Geometry Problem Solving with Virtual Manipulatives and Whiteboard System

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie

    2009-01-01

    In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…

  9. Internet-based distributed collaborative environment for engineering education and design

    NASA Astrophysics Data System (ADS)

    Sun, Qiuli

    2001-07-01

    This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.

  10. Impossible spaces: maximizing natural walking in virtual environments with self-overlapping architecture.

    PubMed

    Suma, Evan A; Lipps, Zachary; Finkelstein, Samantha; Krum, David M; Bolas, Mark

    2012-04-01

    Walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce "impossible spaces," a new design mechanic for virtual environments that wish to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that the larger virtual rooms that expanded to maximally fill our available 9.14 m x 9.14 m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space was uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
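
    The overlap percentage manipulated in the study can be computed for axis-aligned rectangular rooms as sketched below; the room coordinates are illustrative, not the study's actual layouts.

```python
# Sketch of the quantity the study manipulates: the percentage of one virtual
# room's floor area that overlaps a second, "impossible" room.

def overlap_fraction(room_a, room_b):
    """Rooms are axis-aligned rectangles (x_min, y_min, x_max, y_max) in metres.
    Returns the overlap area as a fraction of room_a's area."""
    ax0, ay0, ax1, ay1 = room_a
    bx0, by0, bx1, by1 = room_b
    dx = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    dy = max(0.0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0)
    return (dx * dy) / area_a

if __name__ == "__main__":
    room_a = (0.0, 0.0, 4.0, 4.0)
    room_b = (2.0, 0.0, 6.0, 4.0)        # shifted copy overlapping room_a
    print(f"overlap: {overlap_fraction(room_a, room_b):.0%}")   # 50%
```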

  11. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package that has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data with other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  12. Do students with higher self-efficacy exhibit greater and more diverse scientific inquiry skills: An exploratory investigation in "River City", a multi-user virtual environment

    NASA Astrophysics Data System (ADS)

    Ketelhut, Diane Jass

    In this thesis, I conduct an exploratory study to investigate the relationship between students' self-efficacy on entry into authentic scientific activity and the scientific inquiry behaviors they employ while engaged in that process, over time. Scientific inquiry has been a major standard in most science education policy doctrines for the past two decades and is exemplified by activities such as making observations, formulating hypotheses, gathering and analyzing data, and forming conclusions from that data. The self-efficacy literature, however, indicates that self-efficacy levels affect perseverance and engagement. This study investigated the relationship between these two constructs. The study is conducted in a novel setting, using an innovative science curriculum delivered through an interactive computer technology that recorded each student's conversations, movements, and activities while behaving as a practicing scientist in a "virtual world" called River City. River City is a Multi-User Virtual Environment designed to engage students in a collaborative scientific inquiry-based learning experience. As a result, I was able to follow students' moment-by-moment choices of behavior while they were behaving as scientists. I collected data on students' total scientific inquiry behaviors over three visits to River City, as well as the number of sources from which they gathered their scientific data. I analyzed my longitudinal data on the 96 seventh-graders using individual growth modeling. I found that self-efficacy played a role in the number of data-gathering behaviors students engaged in initially, with high self-efficacy students engaging in more data gathering than students with low self-efficacy. However, the impact of student self-efficacy on rate of change in data gathering behavior differed by gender; by the end of the study, student self-efficacy did not impact data gathering. In addition, students' level of self-efficacy did not affect how many different sources from which they chose to gather data. There are indications in my results that novel interventions like a Multi-user Virtual Environment might act as a catalyst for change in student learning. Further research using these techniques may enable a better understanding of the interaction between self-efficacy and scientific inquiry, and eventually science learning outcomes.

  13. An Activity Theoretical Perspective towards the Design of an ICT-Enhanced After-School Programme for Academically At-Risk Students

    ERIC Educational Resources Information Center

    Tay, Lee Yong; Lim, Cher Ping

    2010-01-01

    This paper examines how a game-like 3D Multi-User Virtual Environment (MUVE), Quest Atlantis (QA), is used in an after-school programme to engage a group of 14 academically at-risk primary students in their learning. It adopts an activity theoretical perspective to identify the disturbances and contradictions during the implementation of the…

  14. VAGUE: a graphical user interface for the Velvet assembler.

    PubMed

    Powell, David R; Seemann, Torsten

    2013-01-15

    Velvet is a popular open-source de novo genome assembly software tool, which is run from the Unix command line. Most of the problems experienced by new users of Velvet revolve around constructing syntactically and semantically correct command lines, getting input files into acceptable formats and assessing the output. Here, we present Velvet Assembler Graphical User Environment (VAGUE), a multi-platform graphical front-end for Velvet. VAGUE aims to make sequence assembly accessible to a wider audience and to facilitate better usage amongst existing users of Velvet. VAGUE is implemented in JRuby and targets the Java Virtual Machine. It is available under an open-source GPLv2 licence from http://www.vicbioinformatics.com/. Contact: torsten.seemann@monash.edu.

  15. Facilitating 3D Virtual World Learning Environments Creation by Non-Technical End Users through Template-Based Virtual World Instantiation

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing

    2013-01-01

    This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…

  16. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions with respect to the specific content to be displayed. Virtual worlds (VWs) are becoming a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation in solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also heightened by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, operators' ability to manipulate the displayed content is explored with respect to phenomena such as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. This study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with respect to the specific type of visualization and different levels of immersion.

  17. Small satellite multi mission C2 for maximum effect

    USGS Publications Warehouse

    Miller, E.; Medina, O.; Lane, C.R.; Kirkham, A.; Ivancic, W.; Jones, B.; Risty, R.

    2006-01-01

    This paper discusses US Air Force, US Army, US Navy, and NASA demonstrations based around the Virtual Mission Operations Center (VMOC) and its application in fielding a Multi Mission Satellite Operations Center (MMSOC) designed to integrate small satellites into the inherently tiered system environment of operations. The intent is to begin standardizing the spacecraft-to-ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of Tactics, Techniques and Procedures (TTPs) that lead to Responsive Space employment. Combining the US Air Force/Army focus on theater command and control of payloads with the US Navy's user-collaboration and FORCEnet-consistent approach lays the groundwork for the fundamental change needed to maximize responsive space effects.

  18. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
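
    As a simplified, single-threaded sketch of the frame-to-frame tracking stage described above (the paper's system additionally detects invariant features and runs capture, tracking, detection and rendering in separate threads), the following assumes OpenCV and a generic camera source:

    ```python
    # Minimal optical-flow feature tracking loop (simplified, single-threaded).
    import cv2

    cap = cv2.VideoCapture(0)                      # any camera or video file
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)

    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track existing features with pyramidal Lucas-Kanade optical flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
        # Re-detect when too many features are lost; in the multithreaded
        # architecture this detection step runs in its own thread.
        if len(points) < 50:
            points = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7)
        prev_gray = gray
    ```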

  19. Exploring the User Experience of Three-Dimensional Virtual Learning Environments

    ERIC Educational Resources Information Center

    Shin, Dong-Hee; Biocca, Frank; Choo, Hyunseung

    2013-01-01

    This study examines the users' experiences with three-dimensional (3D) virtual environments to investigate the areas of development as a learning application. For the investigation, the modified technology acceptance model (TAM) is used with constructs from expectation-confirmation theory (ECT). Users' responses to questions about cognitive…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ragan, Eric D; Bowman, Doug A; Scerbo, Siroberto

    Virtual reality (VR) systems have been proposed for use in numerous training scenarios, such as room clearing, which require the trainee to maintain spatial awareness. But many VR training systems lack a fully surrounding display, requiring trainees to use a combination of physical and virtual turns to view the environment, thus decreasing spatial awareness. One solution to this problem is to amplify head rotations, such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the surrounding environment with head movements alone. For example, in a multi-monitor system covering only a 90-degree field of regard, head rotations could be amplified four times to allow the user to see the entire 360-degree surrounding environment. This solution is attractive because it can be used with lower-cost VR systems and does not require virtual turning. However, the effects of amplified head rotations on spatial awareness and training transfer are not well understood. We hypothesized that small amounts of amplification might be tolerable, but that larger amplifications might cause trainees to become disoriented and to have decreased task performance and training transfer. In this paper, we will present our findings from an experiment designed to investigate these hypotheses. The experiment placed users in a virtual warehouse and asked them to move from room to room, counting objects placed around them in space. We varied the amount of amplification applied during these trials, and also varied the type of display used (head-mounted display or CAVE). We measured task performance and spatial awareness. We then assessed training transfer in an assessment environment with a fully surrounding display and no amplification. The results of this study will inform VR training system developers about the potential negative effects of using head rotation amplification and contribute to more effective VR training system design.
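
    The amplification mapping itself is straightforward; a minimal sketch of the idea, using the 90-degree field of regard and 4x factor given as an example above, is:

    ```python
    # Map a physical head yaw to an amplified virtual yaw so that a limited
    # field of regard covers the full 360-degree scene. Values are examples.
    def amplified_virtual_yaw(physical_yaw_deg: float,
                              field_of_regard_deg: float = 90.0) -> float:
        amplification = 360.0 / field_of_regard_deg   # e.g. 4x for 90 degrees
        return (physical_yaw_deg * amplification) % 360.0

    # Turning the head 45 degrees physically yields a 180-degree virtual turn.
    print(amplified_virtual_yaw(45.0))   # -> 180.0
    ```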

  1. 3D multiplayer virtual pets game using Google Card Board

    NASA Astrophysics Data System (ADS)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows users to interact with a virtual environment that is generated and simulated by computer. This technology can give users the sensation of actually being in the virtual environment. VR presents the virtual environment directly to the user rather than on an ordinary screen, but it requires an additional device, known as a Head Mounted Device (HMD), to show the view of the virtual environment. The Oculus Rift and Microsoft HoloLens are among the best-known HMD devices used in VR, and in 2014 Google Cardboard was introduced at the Google I/O developers conference. Google Cardboard is a VR platform which allows users to enjoy VR in a simple and inexpensive way. In this research, we explore Google Cardboard to develop a pet-raising simulation game. Google Cardboard is used to create the view of the VR environment, while the view and controls in the VR environment are built using the Unity game engine. The simulation process is designed using a Finite State Machine (FSM), which helps to specify the process clearly so that it models the raising of a pet well. Raising a pet is a fun activity, but there are many conditions that can make it difficult to do, such as environmental conditions, disease and high cost. This research therefore aims to explore and implement Google Cardboard in a pet-raising simulation.
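
    As a hedged sketch of how a finite state machine can drive such a simulation (the states and events below are illustrative, not taken from the paper's Unity project), consider:

    ```python
    # Table-driven finite state machine for a virtual pet (illustrative only).
    class PetFSM:
        # transitions[state][event] -> next state
        TRANSITIONS = {
            "idle":     {"feed": "eating", "play": "playing", "tired": "sleeping"},
            "eating":   {"done": "idle"},
            "playing":  {"done": "idle", "tired": "sleeping"},
            "sleeping": {"wake": "idle"},
        }

        def __init__(self) -> None:
            self.state = "idle"

        def handle(self, event: str) -> str:
            # Events with no transition from the current state are ignored.
            self.state = self.TRANSITIONS[self.state].get(event, self.state)
            return self.state

    pet = PetFSM()
    print(pet.handle("feed"))   # -> eating
    print(pet.handle("done"))   # -> idle
    ```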

  2. Adaptive User Model for Web-Based Learning Environment.

    ERIC Educational Resources Information Center

    Garofalakis, John; Sirmakessis, Spiros; Sakkopoulos, Evangelos; Tsakalidis, Athanasios

    This paper describes the design of an adaptive user model and its implementation in an advanced Web-based Virtual University environment that encompasses combined and synchronized adaptation between educational material and well-known communication facilities. The Virtual University environment has been implemented to support a postgraduate…

  3. A Comparison of the Effects of Classroom and Multi-User Virtual Environments on the Perceived Speaking Anxiety of Adult Post-Secondary English Language Learners

    ERIC Educational Resources Information Center

    Abal, Abdulaziz

    2013-01-01

    The population of English Language Learners (ELLs) globally has been increasing substantially every year. In the United States alone, adult ELLs are the fastest growing portion of learners in adult education programs (Yang, 2005). There is a significant need to improve the teaching of English to ELLs in the United States and other English-speaking…

  4. Virtual reality and telerobotics applications of an Address Recalculation Pipeline

    NASA Technical Reports Server (NTRS)

    Regan, Matthew; Pose, Ronald

    1994-01-01

    The technology described in this paper was designed to reduce latency in response to user interactions in immersive virtual reality environments. It is also ideally suited to telerobotic applications such as interaction with remote robotic manipulators in space or in deep sea operations. In such circumstances, the significant latency in the response to user stimulus caused by communications delays, and the disturbing jerkiness due to low and unpredictable frame rates from compressed video feedback or computationally limited virtual worlds, can be masked by our techniques. The user is provided with highly responsive visual feedback independent of communication or computational delays in providing physical video feedback or in rendering virtual world images. Virtual and physical environments can be combined seamlessly using these techniques.

  5. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected on to three walls and the floor. The CAVE is a multi-person, room sized, high resolution, 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.

  6. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
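
    The indoor positioning described above amounts to step-based dead reckoning: each recognized step advances the estimated position along the heading reported by the inertial sensor. A minimal sketch of that update, with an assumed fixed step length, is:

    ```python
    # Step-based dead reckoning update (step length and headings are assumptions).
    import math

    def step_update(x: float, y: float, heading_deg: float,
                    step_length_m: float = 0.65) -> tuple[float, float]:
        heading = math.radians(heading_deg)
        return (x + step_length_m * math.cos(heading),
                y + step_length_m * math.sin(heading))

    # Starting at a known landmark (e.g. a door) and taking three steps "north".
    x, y = 0.0, 0.0
    for _ in range(3):
        x, y = step_update(x, y, heading_deg=90.0)
    print(round(x, 2), round(y, 2))   # -> 0.0 1.95
    ```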

  7. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  8. Developing effective serious games: the effect of background sound on visual fidelity perception with varying texture resolution.

    PubMed

    Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam

    2012-01-01

    Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental issues regarding simulation fidelity and multi-modal cue interaction and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). Results suggest that the perception of visual fidelity is dependent on ambient sound and more specifically, white noise can have detrimental effects on our perception of high quality visuals. The results of this study will guide future studies that will ultimately aid in developing an understanding of the role that fidelity, and multi-modal interactions play with respect to knowledge transfer and retention for users of virtual simulations and serious games.

  9. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  10. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  11. Validation of virtual reality as a tool to understand and prevent child pedestrian injury.

    PubMed

    Schwebel, David C; Gaines, Joanna; Severson, Joan

    2008-07-01

    In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.

  12. Altering User Movement Behaviour in Virtual Environments.

    PubMed

    Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy

    2017-04-01

    In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performances to determine whether their trajectories were altered significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
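
    One simple way to quantify the reported trajectory alteration is the mean distance of recorded positions from the straight line between two waypoints; the sketch below shows that computation in general form (it is not the authors' analysis code, and the sample trajectory is invented):

    ```python
    # Mean perpendicular deviation of a 2D trajectory from the straight
    # start-goal line; the sample trajectory is made up for illustration.
    import numpy as np

    def mean_deviation(trajectory: np.ndarray, start: np.ndarray,
                       goal: np.ndarray) -> float:
        direction = (goal - start) / np.linalg.norm(goal - start)
        offsets = trajectory - start
        along = offsets @ direction                 # progress along the line
        closest = np.outer(along, direction)        # nearest points on the line
        return float(np.linalg.norm(offsets - closest, axis=1).mean())

    path = np.array([[0.0, 0.0], [1.0, 0.4], [2.0, 0.6], [3.0, 0.2], [4.0, 0.0]])
    print(mean_deviation(path, np.array([0.0, 0.0]), np.array([4.0, 0.0])))  # ~0.24
    ```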

  13. The Immersive Virtual Reality Experience: A Typology of Users Revealed Through Multiple Correspondence Analysis Combined with Cluster Analysis Technique.

    PubMed

    Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz

    2016-03-01

    Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of the virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through a head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with a cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is associated with higher levels of virtual reality effectiveness, that is, a higher predisposition to be immersed and reduced cybersickness symptoms in the VRE, than the console gamer and nongamer profiles. These findings can be a useful orientation in clinical practice and future research, as they help identify which users are more predisposed to benefit from immersive VREs.
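
    As a rough, hedged sketch of the profiling pipeline rather than the authors' analysis, categorical user attributes can be embedded in a low-dimensional space and then clustered; for brevity the example approximates multiple correspondence analysis with PCA on a one-hot indicator matrix, and the attribute names and values are invented:

    ```python
    # Approximate MCA-plus-clustering pipeline (illustrative, not the study's code).
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    users = pd.DataFrame({
        "age_group":  ["18-25", "26-35", "18-25", "36-45", "26-35", "18-25"],
        "gamer_type": ["pc", "console", "pc", "nongamer", "console", "nongamer"],
        "immersion":  ["high", "medium", "high", "low", "medium", "low"],
    })

    indicator = pd.get_dummies(users)                       # one-hot indicator matrix
    coords = PCA(n_components=2).fit_transform(indicator)   # stand-in for MCA axes
    users["cluster"] = KMeans(n_clusters=3, n_init=10).fit_predict(coords)
    print(users)
    ```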

  14. Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.

    PubMed

    Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor

    2008-03-01

    To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.

  15. Librarians without Borders? Virtual Reference Service to Unaffiliated Users

    ERIC Educational Resources Information Center

    Kibbee, Jo

    2006-01-01

    The author investigates issues faced by academic research libraries in providing virtual reference services to unaffiliated users. These libraries generally welcome visitors who use on-site collections and reference services, but are these altruistic policies feasible in a virtual environment? This paper reviews the use of virtual reference…

  16. Virtual community centre for power wheelchair training: Experience of children and clinicians.

    PubMed

    Torkia, Caryne; Ryan, Stephen E; Reid, Denise; Boissy, Patrick; Lemay, Martin; Routhier, François; Contardo, Resi; Woodhouse, Janet; Archambault, Phillipe S

    2017-11-02

    To: 1) characterize the overall experience in using the McGill immersive wheelchair - community centre (miWe-CC) simulator; and 2) investigate the experience of presence (i.e., sense of being in the virtual rather than in the real, physical environment) while driving a PW in the miWe-CC. A qualitative research design with structured interviews was used. Fifteen clinicians and 11 children were interviewed after driving a power wheelchair (PW) in the miWe-CC simulator. Data were analyzed using the conventional and directed content analysis approaches. Overall, participants enjoyed using the simulator and experienced a sense of presence in the virtual space. They felt a sense of being in the virtual environment, involved and focused on driving the virtual PW rather than on the surroundings of the actual room where they were. Participants reported several similarities between the virtual community centre layout and activities of the miWe-CC and the day-to-day reality of paediatric PW users. The simulator replicated participants' expectations of real-life PW use and promises to have an effect on improving the driving skills of new PW users. Implications for rehabilitation Among young users, the McGill immersive wheelchair (miWe) simulator provides an experience of presence within the virtual environment. This experience of presence is generated by a sense of being in the virtual scene, a sense of being involved, engaged, and focused on interacting within the virtual environment, and by the perception that the virtual environment is consistent with the real world. The miWe is a relevant and accessible approach, complementary to real world power wheelchair training for young users.

  17. STRIVE: Stress Resilience In Virtual Environments: a pre-deployment VR system for training emotional coping skills and assessing chronic and acute stress responses.

    PubMed

    Rizzo, Albert; Buckwalter, J Galen; John, Bruce; Newman, Brad; Parsons, Thomas; Kenny, Patrick; Williams, Josh

    2012-01-01

    The incidence of posttraumatic stress disorder (PTSD) in returning OEF/OIF military personnel is creating a significant healthcare challenge. This has served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD. One emerging form of treatment for combat-related PTSD that has shown promise involves the delivery of exposure therapy using immersive Virtual Reality (VR). Initial outcomes from open clinical trials have been positive and fully randomized controlled trials are currently in progress to further validate this approach. Based on our research group's initial positive outcomes using VR to emotionally engage and successfully treat persons undergoing exposure therapy for PTSD, we have begun development in a similar VR-based approach to deliver stress resilience training with military service members prior to their initial deployment. The Stress Resilience In Virtual Environments (STRIVE) project aims to create a set of combat simulations (derived from our existing Virtual Iraq/Afghanistan exposure therapy system) that are part of a multi-episode narrative experience. Users can be immersed within challenging combat contexts and interact with virtual characters within these episodes as part of an experiential learning approach for training a range of psychoeducational and cognitive-behavioral emotional coping strategies believed to enhance stress resilience. The STRIVE project aims to present this approach to service members prior to deployment as part of a program designed to better prepare military personnel for the types of emotional challenges that are inherent in the combat environment. During these virtual training experiences users are monitored physiologically as part of a larger investigation into the biomarkers of the stress response. One such construct, Allostatic Load, is being directly investigated via physiological and neuro-hormonal analysis from specimen collections taken immediately before and after engagement in the STRIVE virtual experience.

  18. DEEP SPACE: High Resolution VR Platform for Multi-user Interactive Narratives

    NASA Astrophysics Data System (ADS)

    Kuka, Daniela; Elias, Oliver; Martins, Ronald; Lindinger, Christopher; Pramböck, Andreas; Jalsovec, Andreas; Maresch, Pascal; Hörtner, Horst; Brandl, Peter

    DEEP SPACE is a large-scale platform for interactive, stereoscopic and high resolution content. The spatial and the system design of DEEP SPACE are facing constraints of CAVETM-like systems in respect to multi-user interactive storytelling. To be used as research platform and as public exhibition space for many people, DEEP SPACE is capable to process interactive, stereoscopic applications on two projection walls with a size of 16 by 9 meters and a resolution of four times 1080p (4K) each. The processed applications are ranging from Virtual Reality (VR)-environments to 3D-movies to computationally intensive 2D-productions. In this paper, we are describing DEEP SPACE as an experimental VR platform for multi-user interactive storytelling. We are focusing on the system design relevant for the platform, including the integration of the Apple iPod Touch technology as VR control, and a special case study that is demonstrating the research efforts in the field of multi-user interactive storytelling. The described case study, entitled "Papyrate's Island", provides a prototypical scenario of how physical drawings may impact on digital narratives. In this special case, DEEP SPACE helps us to explore the hypothesis that drawing, a primordial human creative skill, gives us access to entirely new creative possibilities in the domain of interactive storytelling.

  19. The Effect of Desktop Illumination Realism on a User's Sense of Presence in a Virtual Learning Environment

    ERIC Educational Resources Information Center

    Ehrlich, Justin

    2010-01-01

    The application of virtual reality is becoming ever more important as technology reaches new heights allowing virtual environments (VE) complete with global illumination. One successful application of virtual environments is educational interventions meant to treat individuals with autism spectrum disorder (ASD). VEs are effective with these…

  20. Virtual goods recommendations in virtual worlds.

    PubMed

    Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren

    2015-01-01

    Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With the rapidly growing user volume in VWs, platform providers launch virtual goods in haste and push them to users in order to increase sales revenue. However, this rapid development produces unrelated virtual items which are difficult to remarket. It not only wastes virtual global companies' intelligence resources, but also makes it difficult for users to find suitable virtual goods that fit their virtual home in daily virtual life. In the VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth. Users establish their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. The contact strength and contact influence resulting from interactions with social neighbors influence users' buying intention. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, considering social interactions and the social life circle, has better performance than existing recommendation methods.

  1. Virtual Goods Recommendations in Virtual Worlds

    PubMed Central

    Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren

    2015-01-01

    Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With the rapidly growing user volume in VWs, platform providers launch virtual goods in haste and push them to users in order to increase sales revenue. However, this rapid development produces unrelated virtual items which are difficult to remarket. It not only wastes virtual global companies' intelligence resources, but also makes it difficult for users to find suitable virtual goods that fit their virtual home in daily virtual life. In the VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth. Users establish their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. The contact strength and contact influence resulting from interactions with social neighbors influence users' buying intention. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, considering social interactions and the social life circle, has better performance than existing recommendation methods. PMID:25834837

  2. The Impact of Student Self-efficacy on Scientific Inquiry Skills: An Exploratory Investigation in River City, a Multi-user Virtual Environment

    NASA Astrophysics Data System (ADS)

    Ketelhut, Diane Jass

    2007-02-01

    This exploratory study investigated data-gathering behaviors exhibited by 100 seventh-grade students as they participated in a scientific inquiry-based curriculum project delivered by a multi-user virtual environment (MUVE). This research examined the relationship between students' self-efficacy on entry into the authentic scientific activity and the longitudinal data-gathering behaviors they employed while engaged in that process. Three waves of student behavior data were gathered from a server-side database that recorded all student activity in the MUVE; these data were analyzed using individual growth modeling. The study found that self-efficacy correlated with the number of data-gathering behaviors in which students initially engaged, with high self-efficacy students engaging in more data gathering than students with low self-efficacy. Also, the impact of student self-efficacy on rate of change in data gathering behavior differed by gender. However, by the end of their time in the MUVE, initial student self-efficacy no longer correlated with data gathering behaviors. In addition, students' level of self-efficacy did not affect how many different sources from which they chose to gather data. These results suggest that embedding science inquiry curricula in novel platforms like a MUVE might act as a catalyst for change in students' self-efficacy and learning processes.
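
    Individual growth modeling of this kind is typically fit as a linear mixed-effects model with a random intercept and slope for each student. The sketch below is an illustration using statsmodels, with hypothetical file and column names, not the study's actual model specification:

    ```python
    # Illustrative individual growth model: data-gathering count over waves,
    # moderated by self-efficacy and gender (column names are assumptions).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("muve_logs.csv")   # long format: one row per student per wave

    model = smf.mixedlm(
        "gathering ~ wave * self_eff + wave * gender",   # fixed effects
        data=df,
        groups=df["student_id"],                         # one growth curve per student
        re_formula="~wave",                              # random intercept and slope
    )
    print(model.fit().summary())
    ```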

  3. 3-Dimensional and Interactive Istanbul University Virtual Laboratory Based on Active Learning Methods

    ERIC Educational Resources Information Center

    Ince, Elif; Kirbaslar, Fatma Gulay; Yolcu, Ergun; Aslan, Ayse Esra; Kayacan, Zeynep Cigdem; Alkan Olsson, Johanna; Akbasli, Ayse Ceylan; Aytekin, Mesut; Bauer, Thomas; Charalambis, Dimitris; Gunes, Zeliha Ozsoy; Kandemir, Ceyhan; Sari, Umit; Turkoglu, Suleyman; Yaman, Yavuz; Yolcu, Ozgu

    2014-01-01

    The purpose of this study is to develop a 3-dimensional interactive multi-user and multi-admin IUVIRLAB featuring active learning methods and techniques for university students and to introduce the Virtual Laboratory of Istanbul University and to show effects of IUVIRLAB on students' attitudes on communication skills and IUVIRLAB. Although there…

  4. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA).

    PubMed

    Lee, Yong-Gu; Lyons, Kevin W; Feng, Shaw C

    2004-01-01

    A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design.

  5. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA)

    PubMed Central

    Lee, Yong-Gu; Lyons, Kevin W.; Feng, Shaw C.

    2004-01-01

    A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design. PMID:27366610

  6. The User Community and a Multi-Mission Data Project: Services, Experiences and Directions of the Space Physics Data Facility

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Bilitza, D.; Candey, R.; Chimiak, R.; Cooper, John; Fung, Shing; Harris, B.; Johnson R.; King, J.; Kovalick, T.

    2008-01-01

    From a user's perspective, the multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project offer a unique range of important data and services highly complementary to other services presently available or now evolving in the international heliophysics data environment. The VSP (Virtual Space Physics Observatory) service is an active portal to a wide range of distributed data sources. CDAWeb (Coordinated Data Analysis Web) enables plots, listings and file downloads for current data across the boundaries of missions and instrument types (and now including data from THEMIS and STEREO). SSCWeb, Helioweb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and query logic for most missions currently important to heliophysics science. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock as a unique value-added data product. SPDF also maintains NASA's CDF (Common Data Format) standard and a range of associated tools including translation services. These capabilities are all now available through web services-based APIs as well as through our direct user interfaces. In this paper, we will demonstrate the latest data and capabilities now supported in these multi-mission services, review the lessons we continue to learn about what science users need and value in this class of services, and discuss our current thinking on the future role and appropriate focus of the SPDF effort in the evolving and increasingly distributed heliophysics data environment.
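
    For readers unfamiliar with CDF, a minimal sketch of reading one of these files in Python with the third-party cdflib package looks like the following; the file and variable names are placeholders, not actual SPDF products:

    ```python
    # Reading a Common Data Format (CDF) file such as those served by CDAWeb.
    # The file name and variable names below are hypothetical examples.
    import cdflib

    cdf = cdflib.CDF("omni_sample_1min.cdf")   # a downloaded CDF file
    epoch = cdf.varget("Epoch")                # time tags
    bz = cdf.varget("BZ_GSM")                  # an interplanetary parameter
    print(len(epoch), "records; first Bz value:", bz[0])
    ```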

  7. Evaluating the use of augmented reality to support undergraduate student learning in geomorphology

    NASA Astrophysics Data System (ADS)

    Ockelford, A.; Bullard, J. E.; Burton, E.; Hackney, C. R.

    2016-12-01

    Augmented Reality (AR) supports the understanding of complex phenomena by providing unique visual and interactive experiences that combine real and virtual information and help communicate abstract problems to learners. With AR, designers can superimpose virtual graphics over real objects, allowing users to interact with digital content through physical manipulation. One of the most significant pedagogic features of AR is that it provides an essentially student-centred and flexible space in which students can learn. By actively engaging participants using a design-thinking approach, this technology has the potential to provide a more productive and engaging learning environment than real or virtual learning environments alone. AR is increasingly being used in support of undergraduate learning and public engagement activities across engineering, medical and humanities disciplines but it is not widely used across the geosciences disciplines despite the obvious applicability. This paper presents preliminary results from a multi-institutional project which seeks to evaluate the benefits and challenges of using an augmented reality sandbox to support undergraduate learning in geomorphology. The sandbox enables users to create and visualise topography. As the sand is sculpted, contours are projected onto the miniature landscape. By hovering a hand over the box, users can make it 'rain' over the landscape and the water 'flows' down into rivers and valleys. At undergraduate level, the sandbox is an ideal focus for problem-solving exercises, for example exploring how geomorphology controls hydrological processes, how such processes can be altered and the subsequent impacts of the changes for environmental risk. It is particularly valuable for students who favour a visual or kinesthetic learning style. Results presented in this paper discuss how the sandbox provides a complex interactive environment that encourages communication, collaboration and co-design.

  8. Ergonomic aspects of a virtual environment.

    PubMed

    Ahasan, M R; Väyrynen, S

    1999-01-01

    A virtual environment is an interactive graphic system, mediated through computer technology, that allows a certain level of reality or sense of presence when accessing virtual information. To create this sense of reality in a virtual environment, ergonomics issues are explored in this paper, with the aim of developing presentation formats and related information design that make user-friendly applications attainable and maintainable.

  9. Chemistry in Second Life

    PubMed Central

    Lang, Andrew SID; Bradley, Jean-Claude

    2009-01-01

    This review will focus on the current level of chemistry research, education, and visualization possible within the multi-user virtual environment of Second Life. We discuss how Second Life has been used as a platform for the interactive and collaborative visualization of data from molecules and proteins to spectra and experimental data. We then review how these visualizations can be scripted for immersive educational activities and real-life collaborative research. We also discuss the benefits of the social networking affordances of Second Life for both chemists and chemistry students. PMID:19852781

  10. Chemistry in second life.

    PubMed

    Lang, Andrew S I D; Bradley, Jean-Claude

    2009-10-23

    This review will focus on the current level of chemistry research, education, and visualization possible within the multi-user virtual environment of Second Life. We discuss how Second Life has been used as a platform for the interactive and collaborative visualization of data from molecules and proteins to spectra and experimental data. We then review how these visualizations can be scripted for immersive educational activities and real-life collaborative research. We also discuss the benefits of the social networking affordances of Second Life for both chemists and chemistry students.

  11. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions using sensors such as Leap Motion to recognize the gestures of users in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize the gestures of users, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.
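
    A hedged sketch of the walking-state idea (thresholds and window size are assumptions, and a real implementation would live in the Unity animation controller) is to watch the vertical oscillation of the tracked foot joints over a short window:

    ```python
    # Detect a "walking" state from the vertical oscillation of the foot joints.
    from collections import deque

    class WalkDetector:
        def __init__(self, window: int = 15, threshold_m: float = 0.03) -> None:
            self.left = deque(maxlen=window)
            self.right = deque(maxlen=window)
            self.threshold = threshold_m

        def update(self, left_foot_y: float, right_foot_y: float) -> bool:
            """Feed one frame of foot heights; return True while walking."""
            self.left.append(left_foot_y)
            self.right.append(right_foot_y)
            if len(self.left) < self.left.maxlen:
                return False
            swing = max(max(self.left) - min(self.left),
                        max(self.right) - min(self.right))
            return swing > self.threshold

    detector = WalkDetector()
    # update() would be called once per tracked skeleton frame from the depth camera
    ```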

  12. Personal stories within virtual environments: embodiments of a model for cancer patient information software.

    PubMed

    Greene, D D; Heeter, C

    1998-01-01

    Two new cancer patient information CD-ROMs extend the personal stories within virtual environments model of cancer patient information developed for Breast Cancer Lighthouse. Cancer Pain Retreat and Cancer Prevention Park: Games for Life are intended to inform and inspire users in an emotionally calming and intimately informative manner. The software offers users the experience of visiting a virtual place and meeting and talking with patients and health care professionals.

  13. Rapid prototyping, astronaut training, and experiment control and supervision: distributed virtual worlds for COLUMBUS, the European Space Laboratory module

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen

    2002-02-01

    In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract to the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality system offering a virtual world built after the planned layout of the COLUMBUS module, which lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Through the capability to share the virtual world, cooperative operations can be practiced easily, and trainers and trainees can also work together more effectively by sharing the virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science-reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through the use of metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world shall be operational, so that besides the design and the key ideas, first experimental results can be presented.

  14. Design strategies and functionality of the Visual Interface for Virtual Interaction Development (VIVID) tool

    NASA Technical Reports Server (NTRS)

    Nguyen, Lac; Kenney, Patrick J.

    1993-01-01

    Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphic library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command-line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool: an X-Windows based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. In conjunction with its existing interfaces and predefined suite of behaviors, future development plans for VIVID will be described. These include incorporation of dual-user virtual environment enhancements, tool expansion, and additional behaviors.

  15. User-Centered Iterative Design of a Collaborative Virtual Environment

    DTIC Science & Technology

    2001-03-01

    cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human way finding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.

  16. What Do Context Aware Electronic Alerts from Virtual Learning Environments Tell Us about User Time & Location?

    ERIC Educational Resources Information Center

    Crane, Laura; Benachour, Phillip

    2013-01-01

    The paper describes the analysis of user location and time stamp information automatically logged when students receive and interact with electronic updates from the University's virtual learning environment. The electronic updates are sent to students' mobile devices using RSS feeds. The mobile reception of such information can be received in…

  17. Framing the magic

    NASA Astrophysics Data System (ADS)

    Tsoupikova, Daria

    2006-02-01

    This paper will explore how the aesthetics of the virtual world affects, transforms, and enhances the immersive emotional experience of the user. What we see and what we do upon entering the virtual environment influences our feelings, mental state, physiological changes and sensibility. To create a unique virtual experience the important component to design is the beauty of the virtual world based on the aesthetics of the graphical objects such as textures, models, animation, and special effects. The aesthetic potency of the images that comprise the virtual environment can make the immersive experience much stronger and more compelling. The aesthetic qualities of the virtual world as born out through images and graphics can influence the user's state of mind. Particular changes and effects on the user can be induced through the application of techniques derived from the research fields of psychology, anthropology, biology, color theory, education, art therapy, music, and art history. Many contemporary artists and developers derive much inspiration for their work from their experience with traditional arts such as painting, sculpture, design, architecture and music. This knowledge helps them create a higher quality of images and stereo graphics in the virtual world. The understanding of the close relation between the aesthetic quality of the virtual environment and the resulting human perception is the key to developing an impressive virtual experience.

  18. Training wheelchair navigation in immersive virtual environments for patients with spinal cord injury - end-user input to design an effective system.

    PubMed

    Nunnerley, Joanne; Gupta, Swati; Snell, Deborah; King, Marcus

    2017-05-01

    A user-centred design was used to develop and test the feasibility of an immersive 3D virtual reality wheelchair training tool for people with spinal cord injury (SCI). A Wheelchair Training System was designed and modelled using the Oculus Rift headset and a Dynamic Control wheelchair joystick. The system was tested by clinicians and expert wheelchair users with SCI. Data from focus groups and individual interviews were analysed using a general inductive approach to thematic analysis. Four themes emerged: Realistic System, which described the advantages of a realistic virtual environment; a Wheelchair Training System, which described participants' thoughts on the wheelchair training applications; Overcoming Resistance to Technology, the obstacles to introducing technology within the clinical setting; and Working outside the Rehabilitation Bubble which described the protective hospital environment. The Oculus Rift Wheelchair Training System has the potential to provide a virtual rehabilitation setting which could allow wheelchair users to learn valuable community wheelchair use in a safe environment. Nausea appears to be a side effect of the system, which will need to be resolved before this can be a viable clinical tool. Implications for Rehabilitation Immersive virtual reality shows promising benefit for wheelchair training in a rehabilitation setting. Early engagement with consumers can improve product development.

  19. Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.

    PubMed

    Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2016-01-01

    This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison with a tablet computer incorporating vibration feedback was conducted, using (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. It is noted that multi-point pseudo-haptic feedback performs similarly well when compared to a vibration-based feedback method on both performance measures, elapsed time and nodule detection sensitivity. This proves that the proposed method can be used to convey detailed haptic information for virtual environmental tasks, even subtle ones, using either a computer mouse or a pressure-sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occur, for example, in ever more realistic video games with an increasing emphasis on interaction with the physical environment, or in minimally invasive surgery on soft tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either on desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
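
    The speed-control component can be summarised in one line of logic: the indenter avatar is slowed in proportion to the simulated stiffness under it, so stiffer regions "feel" harder to press into. The mapping below is a minimal sketch with assumed constants, not the paper's actual transfer function:

    ```python
    # Pseudo-haptic speed control: scale avatar speed by local surface stiffness.
    def avatar_speed(input_speed: float, stiffness: float, k: float = 0.8) -> float:
        stiffness = min(max(stiffness, 0.0), 1.0)   # clamp to [0, 1]
        return input_speed * (1.0 - k * stiffness)

    print(avatar_speed(10.0, stiffness=0.1))   # soft tissue: barely slowed -> 9.2
    print(avatar_speed(10.0, stiffness=0.9))   # hard nodule: strongly slowed -> 2.8
    ```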

  20. Guidelines for developing distributed virtual environment applications

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.

    1998-08-01

    We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.

  1. Virtual Environments Supporting Learning and Communication in Special Needs Education

    ERIC Educational Resources Information Center

    Cobb, Sue V. G.

    2007-01-01

    Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…

  2. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the causes of coronary heart disease and to develop diagnostic tools, extracting and visualizing vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. These techniques can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  3. Efficient Learning Using a Virtual Learning Environment in a University Class

    ERIC Educational Resources Information Center

    Stricker, Daniel; Weibel, David; Wissmath, Bartholomaus

    2011-01-01

    This study examines a blended learning setting in an undergraduate course in psychology. A virtual learning environment (VLE) complemented the face-to-face lecture. The usage was voluntary and the VLE was designed to support the learning process of the students. Data from users (N = 80) and non-users (N = 82) from two cohorts were collected.…

  4. Development of an audio-based virtual gaming environment to assist with navigation skills in the blind.

    PubMed

    Connors, Erin C; Yazzolino, Lindsay A; Sánchez, Jaime; Merabet, Lotfi B

    2013-03-27

    Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.

  5. Direct Manipulation in Virtual Reality

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization, such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.
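
    As a rough illustration of the data-probe idea (a hypothetical sketch, not the implementation described in the chapter), the probe's tracked position can be mapped into the data volume each frame and the local flow vector sampled for display; the grid size, sampling scheme and variable names below are assumptions.

```python
# Minimal sketch of a direct-manipulation data probe: each frame, the tracked
# hand position is mapped into the data volume and the local flow vector is
# sampled for display (e.g. to seed a streamline).
import numpy as np

# Hypothetical precomputed flow field on a 32x32x32 grid, 3 components per cell.
flow = np.random.rand(32, 32, 32, 3).astype(np.float32)

def probe_flow(hand_pos_world, volume_origin, voxel_size):
    """Nearest-neighbour sample of the flow field at the probe position."""
    idx = np.floor((np.asarray(hand_pos_world) - volume_origin) / voxel_size)
    idx = np.clip(idx, 0, np.array(flow.shape[:3]) - 1).astype(int)
    return flow[idx[0], idx[1], idx[2]]

# Example frame update: the investigator holds the probe at (0.4, 0.1, 0.25) m.
v = probe_flow((0.4, 0.1, 0.25), volume_origin=np.zeros(3), voxel_size=0.05)
print("local velocity at probe:", v)  # could seed a streamline from here
```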

  6. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve

    2011-03-01

    Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine, as well as other tasks that require hand-eye coordination. One of the most distinctive characteristics of HUVR is that a user can place their hands inside the virtual environment without occluding the 3D image. Built using open-source software and consumer-level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.

  7. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. Within the current prototype, the interface expert maps user input levels to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
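
    A hypothetical sketch of the three-entity split (not the original VEOS/PCLIPS code; the topic names and message fields are invented) is shown below, with a simple in-process publish/subscribe bus standing in for the VEOS protocol layer.

```python
# Sketch of the three cooperating entities: a display/user-event entity, a
# factory entity (simulated or real), and an interface expert that maps user
# interactions in the camera views to factory control messages.
from collections import defaultdict

class Bus:
    """Tiny in-process stand-in for a distributed message layer."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, msg):
        for handler in self.subscribers[topic]:
            handler(msg)

bus = Bus()

# Interface expert: translates camera-view interactions into factory commands.
def interface_expert(event):
    if event["type"] == "grab_object":
        bus.publish("factory.control", {"cmd": "move", "object": event["object"]})

# Factory entity: simulates (or forwards to) the real factory.
def factory(cmd):
    print("factory executing:", cmd)
    bus.publish("world.update", {"object": cmd["object"], "state": "moved"})

# Display entity: renders world updates back into the 3D camera views.
def display(update):
    print("display refresh:", update)

bus.subscribe("user.event", interface_expert)
bus.subscribe("factory.control", factory)
bus.subscribe("world.update", display)

# A user reaches into a camera view and grabs a log.
bus.publish("user.event", {"type": "grab_object", "object": "lincoln_log_7"})
```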

  8. Meeting and Serving Users in Their New Work (and Play) Spaces

    ERIC Educational Resources Information Center

    Peters, Tom

    2008-01-01

    This article examines the public services component of digital and virtual libraries, focusing on the end-user experience. As the number and types of "places" where library users access library collections and services continue to expand (now including cell phones, iPods, and three-dimensional virtual reality environments populated by avatars),…

  9. Multi-User Domain Object Oriented (MOO) as a High School Procedure for Foreign Language Acquisition.

    ERIC Educational Resources Information Center

    Backer, James A.

    Foreign language students experience added difficulty when they are isolated from native speakers and from the culture of the target language. It has been posited that MOO (Multi-User Domain Object Oriented) may help overcome the geographical isolation of these students. MOOs are Internet-based virtual worlds in which people from all over the real…

  10. Using Virtual Worlds in Education: Second Life[R] as an Educational Tool

    ERIC Educational Resources Information Center

    Baker, Suzanne C.; Wentz, Ryan K.; Woods, Madison M.

    2009-01-01

    The online virtual world Second Life (www.secondlife.com) has multiple potential uses in teaching. In Second Life (SL), users create avatars that represent them in the virtual world. Within SL, avatars can interact with each other and with objects and environments. SL offers tremendous creative potential in that users can create content within the…

  11. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  12. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards.

    PubMed

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.

  13. A MOO-Based Virtual Training Environment.

    ERIC Educational Resources Information Center

    Mateas, Michael; Lewis, Scott

    1996-01-01

    Describes the implementation of a virtual environment to support the training of engineers in Panels of Experts (POE), a vehicle for gathering customer data. Describes the environment, discusses some issues of communication and interaction raised by the technology, and relays the experiences of new users within this environment. (RS)

  14. Ambient clumsiness in virtual environments

    NASA Astrophysics Data System (ADS)

    Ruzanka, Silvia; Behar, Katherine

    2010-01-01

    A fundamental pursuit of Virtual Reality is the experience of a seamless connection between the user's body and actions within the simulation. Virtual worlds often mediate the relationship between the physical and virtual body through creating an idealized representation of the self in an idealized space. This paper argues that the very ubiquity of the medium of virtual environments, such as the massively popular Second Life, has now made them mundane, and that idealized representations are no longer appropriate. In our artwork we introduce the attribute of clumsiness to Second Life by creating and distributing scripts that cause users' avatars to exhibit unpredictable stumbling, tripping, and momentary poor coordination, thus subtly and unexpectedly intervening with, rather than amplifying, a user's intent. These behaviors are publicly distributed, and manifest only occasionally - rather than intentional, conscious actions, they are involuntary and ambient. We suggest that the physical human body is itself an imperfect interface, and that the continued blurring of distinctions between the physical body and virtual representations calls for the introduction of these mundane, clumsy elements.

  15. Building a Generic Virtual Research Environment Framework for Multiple Earth and Space Science Domains and a Diversity of Users.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.

    2017-12-01

    Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated whereby data are accessed from multiple external repositories and processed on public or private clouds and centralised supercomputers, using a mixture of user code and well-used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists and repurposed for industry users to help accelerate the diffusion, and hence enable the translation, of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built through a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with support funding from the Australian Government Department of Education. VGL comprises three main modules that provide an interface enabling users first to select their required data, then to choose a tool to process that data, and finally to access compute infrastructure for execution. VGL was initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach of loosely coupling data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible. Although VGL was originally built for researchers who were not fussy about general usability, an increasing emphasis on User Interfaces (UIs) and stability should lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.

  16. Presence Personalization and Persistence: A New Approach to Building Archives to Support Collaborative Research

    NASA Technical Reports Server (NTRS)

    McGlynn, Thomas A.

    2008-01-01

    We discuss approaches to building archives that support the way most science is done. Today research is done in formal teams and informal groups; however, our on-line services are designed to work with a single user. We have begun prototyping a new approach to building archives in which support for collaborative research is built in from the start. We organize the discussion along three elements that we believe are necessary for effective support: we must enable user presence in the archive environment, so that users are able to interact; users must be able to personalize the environment, adding data and capabilities useful to themselves and their team; and these changes must be persistent, so that subsequent sessions can build upon previous ones. In building the archive we see large multi-player interactive games as a paradigm for how this approach can work. These three 'P's are essential in gaming as well, and we shall use insights from the gaming world and virtual reality systems like Second Life in our prototype.

  17. Meal-Maker: A Virtual Meal Preparation Environment for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Kirshner, Sharon; Weiss, Patrice L.; Tirosh, Emanuel

    2011-01-01

    Virtual reality (VR) technology enables evaluation and practice of specific skills in a motivating, user-friendly and safe way. The implementation of virtual game environments within clinical settings has increased substantially in recent years. However, the psychometric properties and feasibility of many applications have not been fully…

  18. Virtual Beach v2.2 User Guide

    EPA Science Inventory

    Virtual Beach version 2.2 (VB 2.2) is a decision support tool. It is designed to construct site-specific Multi-Linear Regression (MLR) models to predict pathogen indicator levels (or fecal indicator bacteria, FIB) at recreational beaches. MLR analysis has outperformed persisten...

  19. Virtually-augmented interfaces for tactical aircraft.

    PubMed

    Haas, M W

    1995-05-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.

  20. Introducing and Evaluating the Behavior of Non-Verbal Features in the Virtual Learning

    ERIC Educational Resources Information Center

    Dharmawansa, Asanka D.; Fukumura, Yoshimi; Marasinghe, Ashu; Madhuwanthi, R. A. M.

    2015-01-01

    The objective of this research is to introduce the behavior of non-verbal features of e-Learners in the virtual learning environment to establish a fair representation of the real user by an avatar who represents the e-Learner in the virtual environment and to distinguish the deportment of the non-verbal features during the virtual learning…

  1. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
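
    The combination step can be sketched as follows (an illustrative stand-in, not the authors' GPU implementation; the weights and per-object scores are invented): each candidate object's bottom-up saliency is blended with a top-down context score, the highest-scoring object is treated as attended, and its level of detail is raised.

```python
# Sketch: blend bottom-up saliency with top-down context, pick the attended
# object, and assign level-of-detail (LOD) accordingly.

def attended_object(objects, w_bottom_up=0.5, w_top_down=0.5):
    """objects: list of dicts with 'name', 'saliency' (0..1) and
    'context' (0..1, e.g. proximity to gaze path or navigation goal)."""
    def score(o):
        return w_bottom_up * o["saliency"] + w_top_down * o["context"]
    return max(objects, key=score)

def assign_lod(objects, attended, high=0, low=2):
    # The attended object gets the full-detail mesh; the rest are coarsened.
    return {o["name"]: (high if o is attended else low) for o in objects}

scene = [
    {"name": "statue",  "saliency": 0.9, "context": 0.2},
    {"name": "doorway", "saliency": 0.4, "context": 0.9},  # on the user's path
    {"name": "plant",   "saliency": 0.3, "context": 0.1},
]
target = attended_object(scene)
print(target["name"], assign_lod(scene, target))
```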

  2. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters using reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environments.
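
    The localized force-transmittal idea can be sketched with a plain breadth-first search over the mesh graph (the attenuation rule and parameter names are assumptions, not the published model); the penetration-depth cutoff is the knob that trades accuracy for speed.

```python
# Sketch: propagate a contact displacement outward through the mesh graph by
# BFS, attenuating it per layer and stopping at a configurable depth.
from collections import deque

def propagate_deformation(adjacency, contact_node, displacement,
                          max_depth=3, attenuation=0.5):
    """adjacency: dict node -> list of neighbouring nodes.
    Returns dict node -> displacement magnitude."""
    deform = {contact_node: displacement}
    queue = deque([(contact_node, 0)])
    visited = {contact_node}
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:          # scalability knob: stop early
            continue
        for nbr in adjacency[node]:
            if nbr not in visited:
                visited.add(nbr)
                deform[nbr] = deform[node] * attenuation
                queue.append((nbr, depth + 1))
    return deform

# Tiny mesh: a chain of nodes 0-1-2-3-4 poked at node 0.
mesh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(propagate_deformation(mesh, contact_node=0, displacement=1.0))
```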

  3. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, an MDE is defined as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.

  4. Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis

    ERIC Educational Resources Information Center

    Chow, Meyrick

    2016-01-01

    There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…

  5. Possibilities and Determinants of Using Low-Cost Devices in Virtual Education Applications

    ERIC Educational Resources Information Center

    Bun, Pawel Kazimierz; Wichniarek, Radoslaw; Górski, Filip; Grajewski, Damian; Zawadzki, Przemyslaw; Hamrol, Adam

    2017-01-01

    Virtual reality (VR) may be used as an innovative educational tool. However, in order to fully exploit its potential, it is essential to achieve the effect of immersion. To more completely submerge the user in a virtual environment, it is necessary to ensure that the user's actions are directly translated into the image generated by the…

  6. A Mobile Virtual Butler to Bridge the Gap between Users and Ambient Assisted Living: A Smart Home Case Study

    PubMed Central

    Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António

    2014-01-01

    Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities, keeping their native features and characteristics while seamlessly promoting them to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home. This initiative arose as a way to deal with a large scourge of our country, where many elderly persons live alone in their homes, often with limited or no physical mobility. The project relies on the mobile agent computing paradigm in order to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services, etc. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, in order to be always present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when he or she goes out. According to the evaluation carried out, the Virtual Butler is assessed as a very interesting and loved digital friend, filling the gap between the user and the smart home. The evaluation also showed that the Virtual Butler concept can be easily ported to other types of smart and assistive environments such as airports, hospitals, shopping malls, offices, etc. PMID:25102342

  7. A mobile Virtual Butler to bridge the gap between users and ambient assisted living: a Smart Home case study.

    PubMed

    Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António

    2014-08-06

    Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities, keeping their native features and characteristics while seamlessly promoting them to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home. This initiative arose as a way to deal with a large scourge of our country, where many elderly persons live alone in their homes, often with limited or no physical mobility. The project relies on the mobile agent computing paradigm in order to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services, etc. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, in order to be always present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when he or she goes out. According to the evaluation carried out, the Virtual Butler is assessed as a very interesting and loved digital friend, filling the gap between the user and the smart home. The evaluation also showed that the Virtual Butler concept can be easily ported to other types of smart and assistive environments such as airports, hospitals, shopping malls, offices, etc.

  8. A Model Supported Interactive Virtual Environment for Natural Resource Sharing in Environmental Education

    ERIC Educational Resources Information Center

    Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.

    2013-01-01

    This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…

  9. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  10. Virtual Tour Environment of Cuba's National School of Art

    NASA Astrophysics Data System (ADS)

    Napolitano, R. K.; Douglas, I. P.; Garlock, M. E.; Glisic, B.

    2017-08-01

    Innovative technologies have enabled new opportunities for collecting, analyzing, and sharing information about cultural heritage sites. Through a combination of two of these technologies, spherical imaging and a virtual tour environment, we preliminarily documented one of Cuba's National Schools of Art, the National Ballet School. The Ballet School is one of the five National Art Schools built in Havana, Cuba after the revolution. Due to changes in the political climate, construction was halted on the schools before completion. The Ballet School in particular was partially completed but never used for the intended purpose. Over the years, the surrounding vegetation and environment have started to overtake the buildings; damage such as missing bricks, corroded rebar, and broken tie bars can be seen. We created a virtual tour through the Ballet School which highlights key satellite classrooms and the main domed performance spaces. Scenes of the virtual tour were captured utilizing the Ricoh Theta S spherical imaging camera and processed with Kolor Panotour virtual environment software. Different forms of data can be included in this environment in order to provide a user with pertinent information. Image galleries, hyperlinks to websites, videos, PDFs, and links to databases can be embedded within the scene and interacted with by a user. By including this information within the virtual tour, a user can better understand how the site was constructed as well as the existing types of damage. The results of this work are recommendations for how a site can be preliminarily documented and how information can be initially organized and shared.

  11. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
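
    The fusion step described above can be sketched as follows (a hypothetical illustration, not the project's code; the marker-centroid positioning and the function names are assumptions): the motion-capture markers supply the surrogate's position, the sensor package supplies its orientation, and the combined pose places the virtual tablet, and the UI projected onto it, relative to the headset.

```python
# Sketch: combine marker positions (position) and IMU output (orientation)
# into a single pose for the virtual surrogate each frame.
import numpy as np

def surrogate_pose(marker_positions, imu_quaternion, hmd_position):
    """marker_positions: (N, 3) marker coordinates in tracking space.
    imu_quaternion: (w, x, y, z) orientation from the sensor package.
    Returns the surrogate's position relative to the HMD plus its orientation."""
    position = np.mean(np.asarray(marker_positions), axis=0)  # marker centroid
    relative_pos = position - np.asarray(hmd_position)
    return {"position": relative_pos, "orientation": np.asarray(imu_quaternion)}

pose = surrogate_pose(
    marker_positions=[(0.10, 1.20, 0.50), (0.30, 1.21, 0.52), (0.20, 1.05, 0.51)],
    imu_quaternion=(0.98, 0.0, 0.17, 0.0),
    hmd_position=(0.0, 1.60, 0.0),
)
print(pose)  # drives placement of the virtual tablet and its control interface
```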

  12. Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-06-01

    This paper presents a hybrid character control interface that provides the ability to synthesize in real time a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control, which directly maps the user's pose to the character; the motion controller, which synthesizes the desired motion of the character based on an activity recognition methodology; and the hybrid control, which combines the performance animation control and the motion controller. With the methodology presented, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to trigger a variety of actions that cannot be performed directly by the user but are instead synthesized by the system. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three full-body character control methodologies.
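
    A possible dispatcher for the three modes might look like the sketch below (the confidence threshold, blending rule, and names are assumptions, not the paper's method): direct pose mapping by default, synthesized motion when an action is recognized with high confidence, and a blend of the two in the hybrid case.

```python
# Sketch of a mode dispatcher for hybrid full-body character control.

def choose_control(user_pose, recognized_action, confidence, threshold=0.8):
    """Decide which control module drives the character this frame."""
    if recognized_action is None:
        return {"mode": "performance", "pose": user_pose}
    if confidence >= threshold:
        return {"mode": "controller", "action": recognized_action}
    # Hybrid: keep the captured pose but layer the synthesized action on top.
    blend = confidence / threshold
    return {"mode": "hybrid", "pose": user_pose,
            "action": recognized_action, "blend": blend}

print(choose_control(user_pose="<captured skeleton>", recognized_action=None,
                     confidence=0.0))
print(choose_control(user_pose="<captured skeleton>",
                     recognized_action="climb_ladder", confidence=0.9))
```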

  13. Rehabilitation Program Integrating Virtual Environment to Improve Orientation and Mobility Skills for People Who Are Blind

    PubMed Central

    Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.

    2014-01-01

    This paper presents the integration of a virtual environment (BlindAid) into an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores whether and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine participants who were congenitally, adventitiously, or newly blind, during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated, first, that the BlindAid training gave participants additional time to explore the virtual environment systematically. Second, it helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, or newly blind. PMID:25284952

  14. Comparative study on collaborative interaction in non-immersive and immersive systems

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki

    2007-09-01

    This research studies Virtual Reality simulation for collaborative interaction so that different people from different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where an object's behavior is determined by the combination of the multiple inputs. Issues addressed in this research are: 1) the effects of using haptics on collaborative interaction, and 2) the possibilities of collaboration between users from different environments. We conducted user tests on our system in several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. The case studies involve two forms of interaction: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe the laws of physics while constructing a dollhouse from existing building blocks under the effects of gravity. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.
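
    The shared-object behaviour can be sketched as a per-frame combination of all connected users' input forces (an assumed illustration, not the study's implementation; the physics constants are arbitrary):

```python
# Sketch: each user's input force is collected per frame and the object's
# motion is driven by the combined force, so concurrent inputs from different
# sites act on one shared object.
import numpy as np

def update_shared_object(position, velocity, user_forces, mass=1.0, dt=1/60):
    """user_forces: list of 3D force vectors, one per connected user."""
    total_force = np.sum(np.asarray(user_forces, dtype=float), axis=0)
    acceleration = total_force / mass
    velocity = velocity + acceleration * dt
    position = position + velocity * dt
    return position, velocity

pos, vel = np.zeros(3), np.zeros(3)
# Two users push the virtual stretcher in slightly different directions.
pos, vel = update_shared_object(pos, vel, [(1.0, 0.0, 0.0), (0.8, 0.2, 0.0)])
print(pos, vel)
```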

  15. The Effect of Procedural Guidance on Students' Skill Enhancement in a Virtual Chemistry Laboratory

    ERIC Educational Resources Information Center

    Ullah, Sehat; Ali, Numan; Rahman, Sami Ur

    2016-01-01

    Various cognitive aids (such as change of color, arrows, etc.) are provided in virtual environments to assist users in task realization. These aids increase users' performance but lead to reduced learning because there is less cognitive load on the users. In this paper we present a new concept of procedural guidance in which textual information…

  16. Cognitive Styles and Virtual Environments.

    ERIC Educational Resources Information Center

    Ford, Nigel

    2000-01-01

    Discussion of navigation through virtual information environments focuses on the need for robust user models that take into account individual differences. Considers Pask's information processing styles and strategies; deep (transformational) and surface (reproductive) learning; field dependence/independence; divergent/convergent thinking;…

  17. DHM simulation in virtual environments: a case-study on control room design.

    PubMed

    Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G

    2012-01-01

    This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room through a participative design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed with ergonomics standards in mind. Using the Unity3D platform to design the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between final users, managers, and the human factors team.

  18. Characterizing Student Navigation in Educational Multiuser Virtual Environments: A Case Study Using Data from the River City Project

    ERIC Educational Resources Information Center

    Dukas, Georg

    2009-01-01

    Though research in emerging technologies is vital to fulfilling their incredible potential for educational applications, it is often fraught with analytic challenges related to large datasets. This thesis explores these challenges in researching multiuser virtual environments (MUVEs). In a MUVE, users assume a persona and traverse a virtual space…

  19. EduMOOs: Virtual Learning Centers.

    ERIC Educational Resources Information Center

    Woods, Judy C.

    1998-01-01

    Multi-user Object Oriented Internet activities (MOOs) permit real time interaction in a text-based virtual reality via the Internet. This article explains EduMOOs (educational MOOs) and provides brief descriptions, World Wide Web addresses, and telnet addresses for selected EduMOOs. Instructions for connecting to a MOO and a list of related Web…

  20. The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    1993-01-01

    This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.

  1. User modeling for distributed virtual environment intelligent agents

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    1999-07-01

    This paper emphasizes the requirement for user modeling by presenting the information necessary to motivate the need for, and use of, user modeling in intelligent agent development. The paper presents information on our current intelligent agent development program, the Symbiotic Information Reasoning and Decision Support (SIRDS) project. We then discuss the areas of intelligent agents and user modeling, which form the foundation of the SIRDS project. Included in the discussion of user modeling are its major components, cognitive modeling and behavioral modeling. We next motivate the need for and use of a methodology for developing user models that encompasses work within cognitive task analysis. We close the paper by drawing conclusions from our current intelligent agent research project and discussing avenues of future research in the utilization of user modeling for the development of intelligent agents for virtual environments.

  2. Cranial implant design using augmented reality immersive system.

    PubMed

    Ai, Zhuming; Evenhouse, Ray; Leigh, Jason; Charbel, Fady; Rasmussen, Mary

    2007-01-01

    Software tools that utilize haptics for sculpting precise fitting cranial implants are utilized in an augmented reality immersive system to create a virtual working environment for the modelers. The virtual environment is designed to mimic the traditional working environment as closely as possible, providing more functionality for the users. The implant design process uses patient CT data of a defective area. This volumetric data is displayed in an implant modeling tele-immersive augmented reality system where the modeler can build a patient specific implant that precisely fits the defect. To mimic the traditional sculpting workspace, the implant modeling augmented reality system includes stereo vision, viewer centered perspective, sense of touch, and collaboration. To achieve optimized performance, this system includes a dual-processor PC, fast volume rendering with three-dimensional texture mapping, the fast haptic rendering algorithm, and a multi-threading architecture. The system replaces the expensive and time consuming traditional sculpting steps such as physical sculpting, mold making, and defect stereolithography. This augmented reality system is part of a comprehensive tele-immersive system that includes a conference-room-sized system for tele-immersive small group consultation and an inexpensive, easily deployable networked desktop virtual reality system for surgical consultation, evaluation and collaboration. This system has been used to design patient-specific cranial implants with precise fit.

  3. Using smartphone technology to deliver a virtual pedestrian environment: usability and validation.

    PubMed

    Schwebel, David C; Severson, Joan; He, Yefei

    2017-09-01

    Various programs effectively teach children to cross streets more safely, but all are labor- and cost-intensive. Recent developments in mobile phone technology offer opportunity to deliver virtual reality pedestrian environments to mobile smartphone platforms. Such an environment may offer a cost- and labor-effective strategy to teach children to cross streets safely. This study evaluated usability, feasibility, and validity of a smartphone-based virtual pedestrian environment. A total of 68 adults completed 12 virtual crossings within each of two virtual pedestrian environments, one delivered by smartphone and the other a semi-immersive kiosk virtual environment. Participants completed self-report measures of perceived realism and simulator sickness experienced in each virtual environment, plus self-reported demographic and personality characteristics. All participants followed system instructions and used the smartphone-based virtual environment without difficulty. No significant simulator sickness was reported or observed. Users rated the smartphone virtual environment as highly realistic. Convergent validity was detected, with many aspects of pedestrian behavior in the smartphone-based virtual environment matching behavior in the kiosk virtual environment. Anticipated correlations between personality and kiosk virtual reality pedestrian behavior emerged for the smartphone-based system. A smartphone-based virtual environment can be usable and valid. Future research should develop and evaluate such a training system.

  4. Augmenting the access grid using augmented reality

    NASA Astrophysics Data System (ADS)

    Li, Ying

    2012-01-01

    The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people from remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only pure video for remote communication, while most AG users expect to collaboratively reference and manipulate 3D geometric models of grid services' results within the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristics of combining the virtual and the real, real-time interaction, and 3D registration, so it is desirable for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure and named the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid services' results with the real video scene of the AG into one AR environment, and provide the opportunity for distributed AG users to interactively and collaboratively participate in the AR environment with a better experience.

  5. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research video and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images : J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. With the aim of improving visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual spaces using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at a given point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while displays of dive survey data are synchronized with the trace. Users can directly refer to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed. A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing a virtual dive track with videos, it is easy to understand the living organisms and geological environments of a dive point. Therefore, these functions will visually support understanding of deep-sea environments in lectures and educational activities.

  6. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the "EyeCane": Feasibility Study

    PubMed Central

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir

    2013-01-01

    Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate as such environments hold great potential for the blind, for uses such as social interaction, online education and, in particular, familiarizing the visually impaired user with a real environment virtually, from the comfort and safety of his or her own home, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the "EyeCane" electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially transfer later to real-world environments with stimuli identical to those from the virtual environment. We show the fast-learned practical use of this algorithm for navigation in simple environments. PMID:23977316
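
    The single-point virtual cane can be sketched as a ray cast from the cane into the scene whose hit distance drives an auditory cue (an illustrative sketch, not the Virtual-EyeCane code; the sphere obstacles and mapping constants are assumptions):

```python
# Sketch: cast a ray from the virtual cane, take the distance to the first
# hit, and convert it to a beep rate that rises as obstacles get closer.
import math

def cane_distance(origin, direction, obstacles):
    """obstacles: list of (center, radius) spheres; direction is assumed to be
    a unit vector. Returns nearest hit distance along the ray, or None."""
    best = None
    ox, oy, oz = origin
    dx, dy, dz = direction
    for (cx, cy, cz), r in obstacles:
        lx, ly, lz = cx - ox, cy - oy, cz - oz
        t = lx * dx + ly * dy + lz * dz           # projection onto the ray
        if t < 0:
            continue                              # obstacle is behind the cane
        d2 = (lx * lx + ly * ly + lz * lz) - t * t
        if d2 <= r * r:
            hit = t - math.sqrt(r * r - d2)
            best = hit if best is None else min(best, hit)
    return best

def beep_rate(distance, max_range=5.0, max_rate=10.0):
    """Map hit distance to beeps per second; silence when nothing is in range."""
    if distance is None or distance > max_range:
        return 0.0
    return max_rate * (1.0 - distance / max_range)

d = cane_distance((0, 1, 0), (0, 0, 1), [((0, 1, 2.0), 0.3)])
print(d, beep_rate(d))  # closer obstacle -> faster beeps
```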

  7. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards

    PubMed Central

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that arises from users' management of different identities and passwords. For this reason, numerous user authentication schemes designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties. PMID:26709702
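
    As background for readers unfamiliar with such protocols, the sketch below illustrates only the generic session-key-agreement pattern that schemes of this kind build on: two parties sharing a long-term secret derive a fresh key from exchanged nonces with a keyed hash. It is not the scheme proposed in this record, and all names and values are hypothetical.

    ```python
    # Generic illustration only -- NOT the scheme proposed above; values are hypothetical.
    import hmac, hashlib, os

    def derive_session_key(shared_secret: bytes, user_id: bytes,
                           nonce_user: bytes, nonce_server: bytes) -> bytes:
        """Both sides compute HMAC(secret, id || nonces) to agree on a fresh key."""
        return hmac.new(shared_secret, user_id + nonce_user + nonce_server,
                        hashlib.sha256).digest()

    # In a real scheme the long-term secret would combine smart-card data and a
    # protected biometric template; here it is just a stand-in value.
    secret = hashlib.sha256(b"card-secret||biometric-template").digest()
    n_user, n_server = os.urandom(16), os.urandom(16)   # exchanged nonces

    k_user = derive_session_key(secret, b"alice", n_user, n_server)
    k_server = derive_session_key(secret, b"alice", n_user, n_server)
    assert k_user == k_server   # both ends now hold the same session key
    ```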

  8. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
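
    A minimal sketch of the output side of such a system is given below (illustrative only, not the authors' hybrid machine learning model): recognized object labels are mapped to context-specific action phrases drawn from a small, hypothetical affordance library.

    ```python
    # Illustrative affordance library; not the hybrid model described above.
    AFFORDANCES = {
        "valve":  ["turn it clockwise to close", "check the gauge next to it"],
        "door":   ["push it open", "lock it with the handle"],
        "beaker": ["pick it up", "pour its contents into another container"],
    }

    def describe_actions(detections, min_confidence=0.6):
        """detections: list of (label, confidence) pairs from a vision model."""
        lines = []
        for label, conf in detections:
            if conf < min_confidence or label not in AFFORDANCES:
                continue
            lines.append(f"You see a {label}. You could: "
                         + "; ".join(AFFORDANCES[label]) + ".")
        return lines

    for line in describe_actions([("valve", 0.91), ("beaker", 0.55), ("door", 0.80)]):
        print(line)
    ```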

  9. Enhancing Navigation Skills through Audio Gaming.

    PubMed

    Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi

    2010-01-01

    We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks.

  10. Enhancing Navigation Skills through Audio Gaming

    PubMed Central

    Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi

    2014-01-01

    We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks. PMID:25505796

  11. The expert surgical assistant. An intelligent virtual environment with multimodal input.

    PubMed

    Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C

    1996-01-01

    Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
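
    The progress-monitoring step described above can be pictured with the toy sketch below; the semantic frames and procedure steps are invented for illustration and do not reproduce the prototype's expert system.

    ```python
    # Toy progress monitor; frames and steps are invented, not the prototype's rules.
    PROCEDURE = [                          # ordered steps of a hypothetical task
        {"action": "identify", "object": "middle_turbinate"},
        {"action": "insert",   "object": "endoscope"},
        {"action": "open",     "object": "maxillary_ostium"},
    ]

    def monitor(inferred_frames):
        """Match semantic frames (from voice + gesture) against the procedure."""
        step = 0
        for frame in inferred_frames:
            if step < len(PROCEDURE) and frame == PROCEDURE[step]:
                print(f"Step {step + 1} completed: {frame['action']} {frame['object']}")
                step += 1
            else:
                expected = PROCEDURE[step] if step < len(PROCEDURE) else None
                print(f"Feedback: unexpected action {frame}, expected {expected}")
        return step == len(PROCEDURE)

    done = monitor([
        {"action": "identify", "object": "middle_turbinate"},
        {"action": "insert",   "object": "endoscope"},
        {"action": "open",     "object": "maxillary_ostium"},
    ])
    print("Procedure complete" if done else "Procedure incomplete")
    ```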

  12. The social computing room: a multi-purpose collaborative visualization environment

    NASA Astrophysics Data System (ADS)

    Borland, David; Conway, Michael; Coposky, Jason; Ginn, Warren; Idaszak, Ray

    2010-01-01

    The Social Computing Room (SCR) is a novel collaborative visualization environment for viewing and interacting with large amounts of visual data. The SCR consists of a square room with 12 projectors (3 per wall) used to display a single 360-degree desktop environment that provides a large physical real estate for arranging visual information. The SCR was designed to be cost-effective, collaborative, configurable, widely applicable, and approachable for naive users. Because the SCR displays a single desktop, a wide range of applications is easily supported, making it possible for a variety of disciplines to take advantage of the room. We provide a technical overview of the room and highlight its application to scientific visualization, arts and humanities projects, research group meetings, and virtual worlds, among other uses.

  13. Scripting human animations in a virtual environment

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.

    1994-01-01

    The current deficiencies of virtual environments (VE) are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a computer model of a human with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head, and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. The recorded data are used to define key frames; motion is interpolated between them, and post-processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
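
    A minimal sketch of the record-then-interpolate workflow is shown below; the joint names, timings, and linear interpolation are illustrative assumptions, not the original system's implementation.

    ```python
    # Illustrative key frames and linear interpolation; not the original system.
    KEYFRAMES = [                 # (time in seconds, {joint: angle in degrees})
        (0.0, {"shoulder": 0.0,  "elbow": 10.0}),
        (1.0, {"shoulder": 45.0, "elbow": 90.0}),
        (2.0, {"shoulder": 30.0, "elbow": 60.0}),
    ]

    def pose_at(t):
        """Interpolate joint angles at time t between surrounding key frames."""
        if t <= KEYFRAMES[0][0]:
            return dict(KEYFRAMES[0][1])
        if t >= KEYFRAMES[-1][0]:
            return dict(KEYFRAMES[-1][1])
        for (t0, p0), (t1, p1) in zip(KEYFRAMES, KEYFRAMES[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                return {j: p0[j] + w * (p1[j] - p0[j]) for j in p0}

    # Generate intermediate frames at 30 fps for the post-processing render.
    for frame in range(0, 61, 15):
        t = frame / 30.0
        print(f"t={t:.2f}s  {pose_at(t)}")
    ```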

  14. Manually locating physical and virtual reality objects.

    PubMed

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interactions with objects in VR for simulation, training, and prototyping that involve reaching for and manually handling virtual objects in a CAVE are more accurate than predicted when locating farther objects.
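
    The sketch below is a simplified binocular-geometry calculation in the spirit of the predicted-error model mentioned above (it is not the authors' exact formulation, and the numbers are illustrative): a point rendered for one interpupillary distance is perceived at a different depth by a viewer with another, which contributes to targeting error.

    ```python
    # Simplified binocular geometry, not the authors' exact model; numbers illustrative.
    def screen_disparity(depth_m, screen_m, ipd_m):
        """On-screen separation of the left/right images of a point at depth_m,
        for a projection screen at distance screen_m from the eyes."""
        return ipd_m * (1.0 - screen_m / depth_m)

    def perceived_depth(disparity_m, screen_m, ipd_m):
        """Depth at which the two eye rays through the screen images intersect."""
        return screen_m * ipd_m / (ipd_m - disparity_m)

    screen = 2.9           # eye-to-screen distance in metres (assumed)
    rendered_depth = 2.0   # virtual corner rendered 2.0 m from the eyes
    ipd_render = 0.065     # IPD assumed by the renderer
    ipd_user = 0.060       # the user's actual IPD

    p = screen_disparity(rendered_depth, screen, ipd_render)
    d_seen = perceived_depth(p, screen, ipd_user)
    print(f"rendered at {rendered_depth:.2f} m, perceived near {d_seen:.2f} m "
          f"-> predicted error {abs(d_seen - rendered_depth) * 100:.1f} cm")
    ```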

  15. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments.

    PubMed

    Lemos, Marcus Vinícius de S; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L; de Carvalho, Carlos Giovanni N; Mendes, Douglas Lopes de S; Costa, Valney da Gama

    2018-02-26

    Virtual sensor provisioning is a central issue for sensor cloud middleware, since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on correlated measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors) and are therefore not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations instead of the physical distance between nodes as in most works in the literature. The second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, then selects an optimal set of sensor nodes from these clusters to respond to users' queries while meeting all parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces the sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios.
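
    The sketch below illustrates only the general provisioning idea, grouping nodes whose measurements are highly correlated and activating one representative per group; it is not the ACASIMv1/ACOSIMv2 algorithms, and the readings are synthetic.

    ```python
    # Illustrates correlation-based grouping only; not the ACxSIMv2 algorithms.
    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.normal(25.0, 2.0, size=100)        # a shared temperature signal
    readings = {                                   # synthetic node measurements
        "n1": base + rng.normal(0, 0.1, 100),
        "n2": base + rng.normal(0, 0.1, 100),      # redundant with n1
        "n3": rng.normal(60.0, 5.0, size=100),     # independent (humidity-like)
    }

    def correlation_clusters(readings, threshold=0.9):
        """Greedy grouping: a node joins a cluster if it correlates strongly
        with that cluster's first (representative) node."""
        clusters = []
        for name, series in readings.items():
            for cluster in clusters:
                rep = readings[cluster[0]]
                if abs(np.corrcoef(series, rep)[0, 1]) >= threshold:
                    cluster.append(name)
                    break
            else:
                clusters.append([name])
        return clusters

    clusters = correlation_clusters(readings)
    active = [c[0] for c in clusters]   # keep only one node per cluster awake
    print("clusters:", clusters, "-> activate:", active)
    ```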

  16. Individual Differences in a Spatial-Semantic Virtual Environment.

    ERIC Educational Resources Information Center

    Chen, Chaomei

    2000-01-01

    Presents two empirical case studies concerning the role of individual differences in searching through a spatial-semantic virtual environment. Discusses information visualization in information systems; cognitive factors, including associative memory, spatial ability, and visual memory; user satisfaction; and cognitive abilities and search…

  17. Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.

    PubMed

    Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker

    2018-06-01

    Create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined. Example environments are described and analysed. With the proposed software, a tool for simulation of VAEs is available. A set of VAEs rendered with the proposed software was described.

  18. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many of the design issues that arise, such as issues of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are specific to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  19. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. In order to establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  20. Constructing Virtual Training Demonstrations

    DTIC Science & Technology

    2008-12-01

    virtual environments have been shown to be effective for training, and distributed game-based architectures contribute an added benefit of wide... investigation of how a demonstration authoring toolset can be constructed from existing virtual training environments using 3-D multiplayer gaming... intelligent agents project to create AI middleware for simulations and videogames. The result was SimBionic®, which enables users to graphically author...

  1. Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.

    PubMed

    Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E

    2007-01-01

    This paper describes a set of tools for performing measurements of objects in a virtual-reality-based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that had hitherto been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue-engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.

  2. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
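
    For contrast with the optic-flow illusions, the gain-based mapping that the paper discusses as the earlier approach can be sketched as follows (a minimal illustration with an assumed gain value, not the authors' implementation).

    ```python
    # Minimal gain-based mapping; the gain value is an assumption for illustration.
    def apply_translation_gain(prev_real, curr_real, virtual_pos, gain=1.2):
        """Scale the tracked real-world step by `gain` before moving the virtual
        camera (gain > 1 compensates for distance underestimation in VEs)."""
        dx = curr_real[0] - prev_real[0]
        dz = curr_real[1] - prev_real[1]
        return (virtual_pos[0] + gain * dx, virtual_pos[1] + gain * dz)

    # One metre of real walking moves the virtual camera 1.2 m.
    virtual = (0.0, 0.0)
    virtual = apply_translation_gain((0.0, 0.0), (0.0, 1.0), virtual, gain=1.2)
    print(virtual)   # -> (0.0, 1.2)
    ```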

  3. Multi-Agent Framework for Virtual Learning Spaces.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Nunez, Gustavo

    1999-01-01

    Discussion of computer-supported collaborative learning, distributed artificial intelligence, and intelligent tutoring systems focuses on the concept of agents, and describes a virtual learning environment that has a multi-agent system. Describes a model of interactions in collaborative learning and discusses agents for Web-based virtual…

  4. Virtual interface environment workstations

    NASA Technical Reports Server (NTRS)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  5. A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience

    ERIC Educational Resources Information Center

    Shih, Ya-Chun

    2015-01-01

    Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…

  6. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…

  7. Libraries' Place in Virtual Social Networks

    ERIC Educational Resources Information Center

    Mathews, Brian S.

    2007-01-01

    Do libraries belong in the virtual world of social networking? With more than 100 million users, this environment is impossible to ignore. A rising philosophy for libraries, particularly in blog-land, involves the concept of being where the users are. Simply using new media to deliver an old message is not progress. Instead, librarians should…

  8. Eodataservice.org: Big Data Platform to Enable Multi-disciplinary Information Extraction from Geospatial Data

    NASA Astrophysics Data System (ADS)

    Natali, S.; Mantovani, S.; Barboni, D.; Hogan, P.

    2017-12-01

    In 1999, US Vice-President Al Gore outlined the concept of 'Digital Earth' as a multi-resolution, three-dimensional representation of the planet to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments, allowing users to navigate through space and time and to access historical and forecast data to support scientists, policy-makers, and any other user. The eodataservice platform (http://eodataservice.org/) implements the Digital Earth concept: eodataservice is a cross-domain platform that makes available a large set of multi-year global environmental collections, allowing data discovery, visualization, combination, processing and download. It implements a "virtual datacube" approach in which data stored in distributed data centers are made available via standardized OGC-compliant interfaces. Dedicated web-based graphical user interfaces (based on the ESA-NASA WebWorldWind technology), web-based notebooks (e.g. Jupyter notebooks), desktop GIS tools and command-line interfaces can be used to access and manipulate the data. The platform can be fully customized to users' needs. So far eodataservice has been used for the following thematic applications: high-resolution satellite data distribution; land surface monitoring using SAR surface deformation data; atmosphere, ocean and climate applications; climate-health applications; urban environment monitoring; safeguarding of cultural heritage sites; and support to farmers and (re)insurances in the agriculture field. In the current work, the EO Data Service concept is presented as a key enabling technology; furthermore, various examples are provided to demonstrate the high level of interdisciplinarity of the platform.
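
    Because the collections are exposed through standardized OGC interfaces, any client can build standard requests; the sketch below shows a plain WMS GetMap call, with a placeholder endpoint URL and layer name rather than actual eodataservice identifiers.

    ```python
    # Standard OGC WMS GetMap request; the endpoint URL and layer are placeholders.
    import requests

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "example_collection",     # placeholder layer name
        "styles": "",
        "crs": "EPSG:4326",
        "bbox": "35.0,6.0,48.0,19.0",       # min lat, min lon, max lat, max lon
        "width": 512,
        "height": 512,
        "format": "image/png",
    }
    resp = requests.get("https://example.org/ows", params=params, timeout=30)
    resp.raise_for_status()
    with open("map.png", "wb") as f:
        f.write(resp.content)               # rendered map for the requested area
    ```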

  9. A Theoretical Cybernetic Macro-Script to Articulate Collaborative Interactions of Cyber Entities in Virtual Worlds

    ERIC Educational Resources Information Center

    Pellas, Nikolaos

    2014-01-01

    Nowadays, the dissemination and exploitation of three-dimensional (3D) multi-user virtual worlds in higher education have been disclosed from their widespread acceptance as candidate learning platforms. However, it is still lacking a theoretical cybernetic macro-script to elaborate the coordination of multiple complex interactions among…

  10. "Mooving" to a Virtual Curriculum.

    ERIC Educational Resources Information Center

    LaRoe, R. John

    Three writing classes at the University of Missouri (freshman, sophomore, and senior) spent much or most of the semester on the virtual campus of the Diversity University (DU) MOO (multi-user object oriented). The freshman class wrote one paper on Internet exploration, another on their favorite Internet destination, and for the third were given a…

  11. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations and... the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a... virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture,

  12. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    NASA Astrophysics Data System (ADS)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid CPU/GPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid CPU/GPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
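
    The load-balancing problem behind the meta-block concept can be illustrated with the greedy sketch below (the block sizes and the heuristic are illustrative; this is not NUMECA's implementation): blocks of unequal cell counts are assigned to ranks so that per-rank work stays as even as possible.

    ```python
    # Greedy illustration of block-to-rank load balancing; not NUMECA's algorithm.
    import heapq

    def balance_blocks(block_sizes, n_ranks):
        """Longest-processing-time heuristic: biggest blocks go first, each to
        the currently least-loaded rank."""
        heap = [(0, rank, []) for rank in range(n_ranks)]   # (load, rank, blocks)
        heapq.heapify(heap)
        for block_id, size in sorted(enumerate(block_sizes),
                                     key=lambda kv: kv[1], reverse=True):
            load, rank, blocks = heapq.heappop(heap)
            blocks.append(block_id)
            heapq.heappush(heap, (load + size, rank, blocks))
        return sorted(heap, key=lambda item: item[1])

    # Hypothetical cell counts of the blocks in a multi-block grid.
    blocks = [120_000, 80_000, 80_000, 40_000, 200_000, 60_000]
    for load, rank, assigned in balance_blocks(blocks, n_ranks=3):
        print(f"rank {rank}: blocks {assigned}, {load} cells")
    ```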

  13. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  14. The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning

    ERIC Educational Resources Information Center

    Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar

    2017-01-01

    Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…

  15. Interactivity in the Online Learning Environment: A Study of Users of the North Carolina Virtual Public School

    ERIC Educational Resources Information Center

    Ingerham, Laura

    2012-01-01

    Recent studies of online learning environments reveal the importance of interaction within the virtual environment. Abrami, Bernard, Bures, Borokhovski, and Tamim (2011) identify and study 3 types of student interactions: student-content, student-teacher, and student-student. This article builds on this classification of interactions as it…

  16. DELIVERing Library Resources to the Virtual Learning Environment

    ERIC Educational Resources Information Center

    Secker, Jane

    2005-01-01

    Purpose: Examines a project to integrate digital libraries and virtual learning environments (VLE) focusing on requirements for online reading list systems. Design/methodology/approach: Conducted a user needs analysis using interviews and focus groups and evaluated three reading or resource list management systems. Findings: Provides a technical…

  17. Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.

    ERIC Educational Resources Information Center

    Hriber, Dennis C.; And Others

    1993-01-01

    Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…

  18. Virtual Reality: You Are There

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Telepresence, or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM, the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.

  19. The Virtual Environment for Rapid Prototyping of the Intelligent Environment

    PubMed Central

    Bouzouane, Abdenour; Gaboury, Sébastien

    2017-01-01

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175

  20. The Virtual Environment for Rapid Prototyping of the Intelligent Environment.

    PubMed

    Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien

    2017-11-07

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants' behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs.
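
    The dataset-generation loop described in these two records can be pictured with the toy sketch below; the routine, sensor names, and CSV format are invented for illustration and do not correspond to the simulator's actual API.

    ```python
    # Toy dataset generator; routine, sensors and format are invented for illustration.
    import csv, random

    ROUTINE = [   # (activity label, virtual sensors that activity should trigger)
        ("enter_kitchen", ["motion_kitchen"]),
        ("make_coffee",   ["motion_kitchen", "power_coffee_machine"]),
        ("watch_tv",      ["motion_living_room", "power_tv"]),
    ]

    def generate_dataset(path, runs=5, seed=1):
        """Write labelled sensor events produced by a scripted inhabitant."""
        random.seed(seed)
        t = 0.0
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time_s", "sensor", "value", "activity"])
            for _ in range(runs):
                for activity, sensors in ROUTINE:
                    for sensor in sensors:
                        t += random.uniform(1.0, 20.0)   # jitter between events
                        writer.writerow([round(t, 1), sensor, 1, activity])

    generate_dataset("ie_events.csv")
    print(open("ie_events.csv").read().splitlines()[:4])
    ```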

  1. The Users' Views on Different Types of Instructional Materials Provided in Virtual Reality Technologies

    ERIC Educational Resources Information Center

    Yildirim, Gürkan

    2017-01-01

    Today, developing technologies are continually being adopted in learning environments. These technologies have been diversifying and changing rapidly. Recently, virtual reality technology has become one of the technologies that experts have often been dwelling on. The present research tries to determine users' opinions and…

  2. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

    Virtual reality based simulations are interactive and engaging, and they have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, physics for movement and collision, and interactive navigation features have been taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control the avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel, for example in evaluating the radiological safety of the research reactor facility.
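
    A conceptual sketch of restraining an avatar to permitted regions is given below (axis-aligned boxes and made-up coordinates, not the 3DVia collision implementation): a proposed step is accepted only if the new position stays inside the walkable zone and outside every obstacle box.

    ```python
    # Conceptual region restraint with axis-aligned boxes; coordinates are made up.
    WALKABLE = (0.0, 0.0, 30.0, 20.0)        # x_min, y_min, x_max, y_max (metres)
    OBSTACLES = [(12.0, 8.0, 18.0, 12.0)]    # e.g. a restricted enclosure

    def inside(box, x, y):
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    def try_move(pos, dx, dy):
        """Accept the step only if the new position stays in an allowed region."""
        nx, ny = pos[0] + dx, pos[1] + dy
        if inside(WALKABLE, nx, ny) and not any(inside(b, nx, ny) for b in OBSTACLES):
            return (nx, ny)
        return pos                            # blocked: stay in place

    pos = (10.0, 10.0)
    print(try_move(pos, 3.0, 0.0))   # blocked by the obstacle box -> (10.0, 10.0)
    print(try_move(pos, 0.0, 5.0))   # allowed -> (10.0, 15.0)
    ```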

  3. Bimanual Interaction with Interscopic Multi-Touch Surfaces

    NASA Astrophysics Data System (ADS)

    Schöning, Johannes; Steinicke, Frank; Krüger, Antonio; Hinrichs, Klaus; Valkov, Dimitar

    Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and therefore require intuitive 3D interaction techniques. Virtual reality (VR) systems provide sophisticated 3D user interfaces but lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation of user interfaces for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations of multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context.

  4. Digital fabrication of multi-material biomedical objects.

    PubMed

    Cheung, H H; Choi, S H

    2009-12-01

    This paper describes a multi-material virtual prototyping (MMVP) system for modelling and digital fabrication of discrete and functionally graded multi-material objects for biomedical applications. The MMVP system consists of a DMMVP module, an FGMVP module and a virtual reality (VR) simulation module. The DMMVP module is used to model discrete multi-material (DMM) objects, while the FGMVP module is for functionally graded multi-material (FGM) objects. The VR simulation module integrates these two modules to perform digital fabrication of multi-material objects, which can be subsequently visualized and analysed in a virtual environment to optimize MMLM processes for fabrication of product prototypes. Using the MMVP system, two biomedical objects, including a DMM human spine and an FGM intervertebral disc spacer are modelled and digitally fabricated for visualization and analysis in a VR environment. These studies show that the MMVP system is a practical tool for modelling, visualization, and subsequent fabrication of biomedical objects of discrete and functionally graded multi-materials for biomedical applications. The system may be adapted to control MMLM machines with appropriate hardware for physical fabrication of biomedical objects.

  5. If You Can Make It There, You Can Make It Anywhere: Providing Reference and Instructional Library Services in the Virtual Environment

    ERIC Educational Resources Information Center

    Leonard, Elizabeth; Morasch, Maureen J.

    2012-01-01

    Despite the old-fashioned view of the academic library as a static institution, libraries can and do change in response to the needs of users and stakeholders. Perhaps the most dramatic shift in services has been the transition from a purely physical to a combination physical/virtual or even virtual-only environment. This article examines how…

  6. Virtual reality games for movement rehabilitation in neurological conditions: how do we meet the needs and expectations of the users?

    PubMed

    Lewis, Gwyn N; Rosie, Juliet A

    2012-01-01

    To review quantitative and qualitative studies that have examined the users' response to virtual reality game-based interventions in people with movement disorders associated with chronic neurological conditions. We aimed to determine key themes that influenced users' enjoyment and engagement in the games and develop suggestions as to how future systems could best address their needs and expectations. There were a limited number of studies that evaluated user opinions. From those found, seven common themes emerged: technology limitations, user control and therapist assistance, the novel physical and cognitive challenge, feedback, social interaction, game purpose and expectations, and the virtual environments. Our key recommendations derived from the review were to avoid technology failure, maintain overt therapeutic principles within the games, encompass progression to promote continuing physical and cognitive challenge, and to provide feedback that is easily and readily associated with success. While there have been few studies that have evaluated the users' perspective of virtual rehabilitation games, our findings indicate that canvassing these experiences provides valuable information on the needs of the intended users. Incorporating our recommendations may enhance the efficacy of future systems to optimize the rehabilitation benefits of virtual reality games.

  7. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
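
    The kind of schema such a component might use can be sketched as below (illustrative only, not the Intelligent Virtual Station's actual schema or data): station objects are linked to the documents associated with them, so a selection in the 3D view can pull up its related material.

    ```python
    # Minimal illustrative schema; not the Intelligent Virtual Station's actual one.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE station_object (
            id     INTEGER PRIMARY KEY,
            name   TEXT NOT NULL,
            module TEXT
        );
        CREATE TABLE document (
            id        INTEGER PRIMARY KEY,
            object_id INTEGER REFERENCES station_object(id),
            kind      TEXT,    -- e.g. 'wiring diagram', 'design document'
            uri       TEXT
        );
    """)
    conn.execute("INSERT INTO station_object VALUES (1, 'Rack LAB1D3', 'US Lab')")
    conn.execute("INSERT INTO document VALUES (1, 1, 'wiring diagram', 'docs/lab1d3.pdf')")

    # Fetch everything associated with an object selected in the virtual station.
    rows = conn.execute("""
        SELECT o.name, d.kind, d.uri
        FROM station_object o JOIN document d ON d.object_id = o.id
        WHERE o.name = ?""", ("Rack LAB1D3",)).fetchall()
    print(rows)
    ```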

  8. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  9. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  10. Immersive Virtual Reality for Visualization of Abdominal CT.

    PubMed

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A; Bodenheimer, Robert E

    2013-03-28

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  11. Latency and User Performance in Virtual Environments and Augmented Reality

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    2009-01-01

    System rendering latency has been recognized by senior researchers, such as Professor Fredrick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, disturb user interaction with virtual objects, and to contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user performance impact provided by several experiments. These experiments will review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter that does not use a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several examples of the effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation. The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics and human-computer interaction.
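
    A simple extrapolative predictor of the kind mentioned above, one with no dynamic movement model, can be sketched as follows; the sample data and latency value are illustrative.

    ```python
    # Constant-velocity extrapolation; sample data and latency are illustrative.
    def predict(samples, latency_s):
        """samples: list of (time, position); extrapolate latency_s ahead."""
        (t0, x0), (t1, x1) = samples[-2], samples[-1]
        velocity = (x1 - x0) / (t1 - t0)
        return x1 + velocity * latency_s

    # Tracker reports head position (one axis, metres) at roughly 100 Hz.
    samples = [(0.00, 0.100), (0.01, 0.104)]      # moving at about 0.4 m/s
    print(predict(samples, latency_s=0.05))       # pose drawn ~50 ms in the future
    ```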

  12. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely adopted. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  13. A Virtual Geant4 Environment

    NASA Astrophysics Data System (ADS)

    Iwai, Go

    2015-12-01

    We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers the users near-real time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach provides a way of minimising the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.

  14. Establishing a virtual learning environment: a nursing experience.

    PubMed

    Wood, Anya; McPhee, Carolyn

    2011-11-01

    The use of virtual worlds has exploded in popularity, but getting started may not be easy. In this article, the authors, members of the corporate nursing education team at University Health Network, outline their experience with incorporating virtual technology into their learning environment. Over a period of several months, a virtual hospital, including two nursing units, was created in Second Life®, allowing more than 500 nurses to role-play in a safe environment without the fear of making a mistake. This experience has provided valuable insight into the best ways to develop and learn in a virtual environment. The authors discuss the challenges of installing and building the Second Life® platform and provide guidelines for preparing users and suggestions for crafting educational activities. This article provides a starting point for organizations planning to incorporate virtual worlds into their learning environment. Copyright 2011, SLACK Incorporated.

  15. Designing 3 Dimensional Virtual Reality Using Panoramic Image

    NASA Astrophysics Data System (ADS)

    Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna

    The high demand for improving the quality of presentation in the knowledge-sharing field comes with rapidly growing technology. The need to develop technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE) for the benefit of our future. A panoramic virtual reality learning environment is essential to help educators overcome the limitations of the traditional technical writing lesson. Virtual reality helps users understand better by providing simulations of real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, in order to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama shows the OGPVE and helps users generate ideas about what they have learnt. This paper discusses part of the development of this panoramic virtual reality. The important phases for developing a successful panoramic image are image acquisition and image stitching, or mosaicing. The combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed in this paper.
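
    One common way to carry out the stitching/mosaicing phase is OpenCV's high-level stitcher, sketched below; the file names are placeholders and this is not the OGPVE production pipeline.

    ```python
    # File names are placeholders; this is not the OGPVE production pipeline.
    import cv2

    files = ["plant_01.jpg", "plant_02.jpg", "plant_03.jpg"]   # overlapping shots
    images = [cv2.imread(f) for f in files]
    if any(img is None for img in images):
        raise SystemExit("could not read one of the input images")

    stitcher = cv2.Stitcher_create()          # default panorama mode
    status, panorama = stitcher.stitch(images)
    if status != 0:                           # 0 means the stitch succeeded
        raise SystemExit(f"stitching failed with status {status}")
    cv2.imwrite("panorama.jpg", panorama)
    ```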

  16. Using Virtual Reality For Outreach Purposes in Planetology

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane; Le Menn, Erwan; Beaunay, Stéphanie

    2016-10-01

    2016 has been a year marked by a technological breakthrough: the availability to the general public, for the first time, of technologically mature virtual reality devices. Virtual reality consists in visually immersing a user in a 3D environment reproduced from real and/or imaginary data, with the possibility to move around and eventually interact with the different elements. In planetology, most places will remain inaccessible to the public for a while, but a fleet of dedicated spacecraft such as orbiters, landers and rovers makes it possible to virtually reconstruct these environments using image processing, cartography and photogrammetry. Virtual reality can then bridge the gap and virtually "send" any user into the place to enjoy the exploration. We are investigating several types of devices to render orbital or ground-based data of planetological interest, mostly from Mars. The simplest system consists of a "cardboard" headset, in which the user simply uses a cellphone as the screen. A more comfortable experience is obtained with more complex systems such as the HTC Vive or Oculus Rift headsets, which include a tracking system that is important for minimizing motion sickness. The third environment that we have developed is based on the CAVE concept, where four 3D video projectors project onto three 2x3 m walls plus the ground. These systems can be used for scientific data analysis, but also prove to be perfectly suited for outreach and education purposes.

  17. Predicting Virtual Learning Environment Adoption: A Case Study

    ERIC Educational Resources Information Center

    Penjor, Sonam; Zander, Pär-Ola

    2016-01-01

    This study investigates the significance of Rogers' Diffusion of Innovations (DOI) theory with regard to the use of a Virtual Learning Environment (VLE) at the Royal University of Bhutan (RUB). The focus is on different adoption types and characteristics of users. Rogers' DOI theory is applied to investigate the influence of five predictors…

  18. Evaluating Technology-Based Educational Interventions: A Review of Two Projects

    ERIC Educational Resources Information Center

    Adamo-Villani, Nicoletta; Dib, Hazar

    2013-01-01

    The article discusses current evaluation methodologies used to assess the usability, user enjoyment, and pedagogical efficacy of virtual learning environments (VLEs) and serious games. It also describes the evaluations of two recently developed projects: a virtual learning environment that employs a fantasy 3D world to engage deaf and hearing…

  19. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and to interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in the atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow large-scale data to be accessed and visualized on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain, weather information, and water simulation, and offers an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes, including virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of the users.

  20. A virtual work space for both hands manipulation with coherency between kinesthetic and visual sensation

    NASA Technical Reports Server (NTRS)

    Ishii, Masahiro; Sukanya, P.; Sato, Makoto

    1994-01-01

    This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually act in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel physical properties such as contact, weight, etc. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face-and-fit task. We then selected the conditions under which the subjects felt most comfortable in performing this task and set up our virtual work space. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and letting the subjects create new models by assembling these components. The subjects could naturally perform the assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for performing tasks that require two-handed manipulation or cooperation between both hands in a natural manner.

  1. A collaborative molecular modeling environment using a virtual tunneling service.

    PubMed

    Lee, Jun; Kim, Jee-In; Kang, Lin-Woo

    2012-01-01

    Collaborative research on three-dimensional molecular modeling can be limited by differences in time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments.

  2. Virtual reality environments for post-stroke arm rehabilitation.

    PubMed

    Subramanian, Sandeep; Knaut, Luiz A; Beaudoin, Christian; McFadyen, Bradford J; Feldman, Anatol G; Levin, Mindy F

    2007-06-22

    Optimal practice and feedback elements are essential requirements for maximal motor recovery in patients with motor deficits due to central nervous system lesions. A virtual environment (VE) was created that incorporates practice and feedback elements necessary for maximal motor recovery. It permits varied and challenging practice in a motivating environment that provides salient feedback. The VE gives the user knowledge of results feedback about motor behavior and knowledge of performance feedback about the quality of pointing movements made in a virtual elevator. Movement distances are related to length of body segments. We describe an immersive and interactive experimental protocol developed in a virtual reality environment using the CAREN system. The VE can be used as a training environment for the upper limb in patients with motor impairments.

  3. Evaluation of the Next-Gen Exercise Software Interface in the NEEMO Analog

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Kalogera, Kent; Sandor, Aniko; Hardy, Marc; Frank, Andrew; English, Kirk; Williams, Thomas; Perera, Jeevan; Amonette, William

    2017-01-01

    This NSBRI (National Space Biomedical Research Institute) funded research grant developed the 'NextGen' exercise software for the NEEMO (NASA Extreme Environment Mission Operations) analog. The goal was to develop a software architecture that integrates instructional, motivational and socialization techniques into a common portal to enhance exercise countermeasures in remote environments, to increase user efficiency and satisfaction, and to institute commonality across multiple exercise systems. The project utilized GUI (Graphical User Interface) design principles focused on intuitive ease of use to minimize training time and realize early user efficiency, and a project requirement was to test the software in an analog environment. Top-level project aims: 1) Improve the usability of crew interface software for exercise CMS (Crew Management System) through common app-like interfaces. 2) Introduce virtual instructional motion training. 3) Use a virtual environment to provide remote socialization with family and friends and to improve exercise technique, adherence, motivation and, ultimately, performance outcomes.

  4. Using Immersive Virtual Reality for Electrical Substation Training

    ERIC Educational Resources Information Center

    Tanaka, Eduardo H.; Paludo, Juliana A.; Cordeiro, Carlúcio S.; Domingues, Leonardo R.; Gadbem, Edgar V.; Euflausino, Adriana

    2015-01-01

    Usually, distribution electricians are called upon to solve technical problems found in electrical substations. In this project, we apply problem-based learning to a training program for electricians, with the help of a virtual reality environment that simulates a real substation. Using this virtual substation, users may safely practice maneuvers…

  5. On delay adjustment for dynamic load balancing in distributed virtual environments.

    PubMed

    Deng, Yunhua; Lau, Rynson W H

    2012-04-01

    Distributed virtual environments (DVEs) have become very popular in recent years due to the rapid growth of applications such as massive multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that the proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
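
    The sketch below illustrates the general setting only, not the paper's delay adjustment schemes: each server's load report arrives with some network delay, so a balancer can extrapolate the stale reading before deciding which servers to rebalance. The data structure, the linear extrapolation and the server names are illustrative assumptions.

      # Hedged sketch: compensate stale load reports before picking a migration.
      from dataclasses import dataclass

      @dataclass
      class ServerReport:
          load: float       # load value the server reported
          age_s: float      # how old the report is (network + processing delay)
          load_rate: float  # estimated load change per second on that server

      def estimated_current_load(report):
          """Extrapolate a stale load report to 'now' using its observed trend."""
          return report.load + report.load_rate * report.age_s

      def pick_migration(reports):
          """Choose (overloaded, underloaded) servers for migrating one DVE region."""
          estimates = {name: estimated_current_load(r) for name, r in reports.items()}
          overloaded = max(estimates, key=estimates.get)
          underloaded = min(estimates, key=estimates.get)
          return overloaded, underloaded

      if __name__ == "__main__":
          reports = {
              "server_a": ServerReport(load=0.90, age_s=0.2, load_rate=0.05),
              "server_b": ServerReport(load=0.40, age_s=0.4, load_rate=-0.02),
          }
          print(pick_migration(reports))  # ('server_a', 'server_b')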

  6. The Optokinetic Cervical Reflex (OKCR) in Pilots of High-Performance Aircraft.

    DTIC Science & Technology

    1997-04-01

    Only fragmentary indexing excerpts of this report's abstract survive: coupled-system virtual reality, described as the attempt to create a realistic, three-dimensional or synthetic immersive environment in which the user is placed; the human-factors interface between the pilot and the flight environment; and a final section presenting a case study of head- and helmet-mounted displays (HMD) and their impact on pilots perceiving themselves as actually moving (flying) through a virtual environment, with reference to the studies of Held et al. (1975) and Young et al. (1975).

  7. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, (3) co-existence within the same virtual space, etc. This paper also discusses a proposed test to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and results from a pilot test.

  8. What makes virtual agents believable?

    NASA Astrophysics Data System (ADS)

    Bogdanovych, Anton; Trescak, Tomas; Simoff, Simeon

    2016-01-01

    In this paper we investigate the concept of believability and attempt to isolate individual characteristics (features) that contribute to making virtual characters believable. As a result of this investigation we have produced a formalisation of believability and, based on this formalisation, built a computational framework focused on the simulation of believable virtual agents that possess the identified features. In order to test whether the identified features are in fact responsible for agents being perceived as more believable, we conducted a user study. In this study we tested user reactions towards virtual characters created for a simulation of aboriginal inhabitants of a particular area of Sydney, Australia in 1770 A.D. The participants of our user study were exposed to short simulated scenes in which virtual agents performed some behaviour in two different ways (possessing a certain aspect of believability vs. not possessing it). The results of the study indicate that virtual agents that appear resource-bounded, that are aware of their environment, of their own interaction capabilities and of their state in the world, that can adapt to changes in the environment and that exist in a correct social context are those perceived as more believable. Further in the paper we discuss these and other believability features and provide a quantitative analysis of the level of contribution of each such feature to the overall perceived believability of a virtual agent.

  9. Virtual personal assistance

    NASA Astrophysics Data System (ADS)

    Aditya, K.; Biswadeep, G.; Kedar, S.; Sundar, S.

    2017-11-01

    Demand for human-computer communication has grown rapidly in recent years. The new generation of autonomous technology aims to give computer interfaces emotional states that relate to and consider the user as well as the system environment. The existing computational model is based on artificial intelligence and, externally, on multi-modal expression augmented with semi-human characteristics. The main problem with this multi-modal expression is that the hardware control given to the Artificial Intelligence (AI) is very limited, so in our project we try to give the AI more control over the hardware. Two main parts, a Speech-to-Text (STT) engine and a Text-to-Speech (TTS) engine, are used to accomplish this requirement. In this work, the hardware consists of a Raspberry Pi 3, a speaker and a microphone, and the programming is done with Python scripting.
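
    A minimal sketch of such an STT/TTS loop in Python is shown below; the speech_recognition and pyttsx3 libraries and the simple echo behaviour are illustrative assumptions, not necessarily the components used by the authors.

      # Hedged sketch: listen on the microphone, transcribe, and speak back.
      import speech_recognition as sr  # speech-to-text front end
      import pyttsx3                   # offline text-to-speech engine

      def main():
          recognizer = sr.Recognizer()
          tts = pyttsx3.init()

          with sr.Microphone() as source:
              recognizer.adjust_for_ambient_noise(source)
              print("Listening...")
              audio = recognizer.listen(source)

          try:
              # Speech to Text: send the captured audio to a recognizer backend.
              text = recognizer.recognize_google(audio)
              print("Heard:", text)
          except sr.UnknownValueError:
              text = "Sorry, I did not understand that."

          # Text to Speech: speak the recognized (or fallback) text to the user.
          tts.say(text)
          tts.runAndWait()

      if __name__ == "__main__":
          main()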

  10. Collaborative voxel-based surgical virtual environments.

    PubMed

    Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan

    2008-01-01

    Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
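
    The following sketch illustrates the general idea behind small voxel updates (it is not the simulator's actual protocol): after a drilling interaction, only the changed voxels are sent to peers rather than the whole volume. The array shapes and the drill example are illustrative assumptions.

      # Hedged sketch: delta updates for a shared voxel volume.
      import numpy as np

      def voxel_delta(before, after):
          """Return (indices, new_values) for voxels modified by a tool interaction."""
          changed = np.argwhere(before != after)
          values = after[tuple(changed.T)]
          return changed, values

      def apply_delta(volume, indices, values):
          """Apply a received delta to a peer's local copy of the shared volume."""
          volume[tuple(indices.T)] = values

      if __name__ == "__main__":
          skull = np.ones((64, 64, 64), dtype=np.uint8)
          drilled = skull.copy()
          drilled[30:33, 30:33, 30:33] = 0          # simulate a small drill bite
          idx, vals = voxel_delta(skull, drilled)
          peer_copy = skull.copy()
          apply_delta(peer_copy, idx, vals)
          assert np.array_equal(peer_copy, drilled)  # peer is now consistent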

  11. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. The OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provide the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance.
    In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the Native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time & MFlops when running in hypervisor-based environments (VMs) as compared to near native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker-based tests achieved >= ~9Gbits/sec, while the KVM configuration only achieved 2.5Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
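
    As a small illustration of the user-namespace UID/GID mapping discussed above, the sketch below reads the standard Linux /proc uid_map and gid_map files; the surrounding workflow is an assumption made for illustration and is not one of the report's test scripts.

      # Hedged sketch: inspect how UIDs/GIDs inside a namespace map to the host.
      import os

      def read_id_maps(pid="self"):
          """Return uid_map and gid_map entries as (inside_id, outside_id, count) tuples."""
          maps = {}
          for name in ("uid_map", "gid_map"):
              with open("/proc/{}/{}".format(pid, name)) as f:
                  maps[name] = [tuple(int(x) for x in line.split()) for line in f]
          return maps

      if __name__ == "__main__":
          print("Current UID/GID:", os.getuid(), os.getgid())
          # Outside any user namespace this prints the identity mapping
          # (0 0 4294967295); inside an unprivileged container it shows how the
          # container's root is mapped to an ordinary, non-privileged host UID.
          print(read_id_maps())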

  12. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.

  13. Students' perceptions of constructivist Internet learning environments by a physics virtual laboratory: the gap between ideal and reality and gender differences.

    PubMed

    Chuang, Shih-Chyueh; Hwang, Fu-Kwun; Tsai, Chin-Chung

    2008-04-01

    The purpose of this study was to investigate the perceptions of Internet users of a physics virtual laboratory, Demolab, in Taiwan. Learners' perceptions of Internet-based learning environments were explored and the role of gender was examined by using preferred and actual forms of a revised Constructivist Internet-based Learning Environment Survey (CILES). The students expressed a clear gap between ideal and reality, and they showed higher preferences for many features of constructivist Internet-based learning environments than for features they had actually learned in Demolab. The results further suggested that male users prefer to be involved in the process of discussion and to show critical judgments. In addition, male users indicated they enjoyed the process of negotiation and discussion with others and were able to engage in reflective thoughts while learning in Demolab. In light of these findings, male users seemed to demonstrate better adaptability to the constructivist Internet-based learning approach than female users did. Although this study indicated certain differences between males and females in their responses to Internet-based learning environments, they also shared numerous similarities. A well-established constructivist Internet-based learning environment may encourage more female learners to participate in the science community.

  14. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
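
    The chroma-key step mentioned above can be illustrated with the short sketch below; the green-screen colour bounds, the HSV colour space and the compositing routine are illustrative assumptions rather than the authors' parameters.

      # Hedged sketch: chroma-key extraction of a foreground video object.
      import cv2
      import numpy as np

      def chroma_key_mask(frame_bgr):
          """Return a binary mask of the foreground, assuming a green backdrop."""
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          lower_green = np.array([40, 60, 60])
          upper_green = np.array([80, 255, 255])
          background = cv2.inRange(hsv, lower_green, upper_green)
          return cv2.bitwise_not(background)  # foreground = everything not green

      def composite(frame_bgr, virtual_background_bgr):
          """Insert the extracted object into a new (virtual conference) background."""
          mask = chroma_key_mask(frame_bgr)
          fg = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
          bg = cv2.bitwise_and(virtual_background_bgr, virtual_background_bgr,
                               mask=cv2.bitwise_not(mask))
          return cv2.add(fg, bg)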

  15. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    NASA Astrophysics Data System (ADS)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that presents the whole learning process in a laboratory. Users need many tools and materials to do practical work in it, so they can experience a new learning atmosphere through this innovation. Technologies are now more sophisticated than before, and bringing them into education can make it more effective and efficient. The supporting technologies needed to build the VR Lab are a head-mounted display device and a hand-motion gesture device, and this research integrates the two. The head-mounted display device is used to view the 3D environment of the virtual reality laboratory, while the hand-motion gesture device captures the user's real hands so that they can be visualized in the virtual reality laboratory. Virtual reality shows that using the newest technologies in the learning process can make it more interesting and easier to understand.

  16. Educational Virtual Environments as a Lens for Understanding both Precise Repeatability and Specific Variation in Learning Ecologies

    ERIC Educational Resources Information Center

    Zuiker, Steven J.

    2012-01-01

    As a global cyberinfrastructure, the Internet makes authentic digital problem spaces like educational virtual environments (EVEs) available to a wide range of classrooms, schools and education systems operating under different circumstantial, practical, social and cultural conditions. And yet, if the makers and users of EVEs both have a hand in…

  17. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills

    NASA Astrophysics Data System (ADS)

    Choi, Kup-Sze

    This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, with a focus on the skills required for making a cross-shaped trench in the cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools, and a haptic device is used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituent tetrahedra of the lens model. Collisions between the virtual probe and the lens are efficiently identified by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
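
    The octree-based collision query mentioned above can be sketched as follows; the tree depth, the points-in-leaves layout and the sphere-shaped probe tip are illustrative assumptions, not the prototype's actual data structures.

      # Hedged sketch: octree over lens vertices for probe-tip collision queries.
      import numpy as np

      class Octree:
          def __init__(self, center, half_size, depth=4):
              self.center = np.asarray(center, dtype=float)
              self.half_size = float(half_size)
              self.depth = depth
              self.points = []      # vertices stored at leaf nodes
              self.children = None  # eight sub-cells, created on demand

          def insert(self, p):
              p = np.asarray(p, dtype=float)
              if self.depth == 0:
                  self.points.append(p)
                  return
              if self.children is None:
                  self.children = [
                      Octree(self.center + 0.5 * self.half_size * np.array([dx, dy, dz]),
                             0.5 * self.half_size, self.depth - 1)
                      for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
              self._child_for(p).insert(p)

          def _child_for(self, p):
              i = (p[0] > self.center[0]) * 4 + (p[1] > self.center[1]) * 2 + (p[2] > self.center[2])
              return self.children[int(i)]

          def collides(self, probe_tip, radius):
              """True if any stored vertex lies within `radius` of the probe tip."""
              probe_tip = np.asarray(probe_tip, dtype=float)
              # Prune whole cells that cannot contain a point within `radius`.
              if np.any(np.abs(probe_tip - self.center) > self.half_size + radius):
                  return False
              if self.children is None:
                  return any(np.linalg.norm(p - probe_tip) <= radius for p in self.points)
              return any(c.collides(probe_tip, radius) for c in self.children)

      if __name__ == "__main__":
          tree = Octree(center=(0, 0, 0), half_size=1.0)
          tree.insert((0.2, 0.1, -0.3))                 # a lens vertex
          print(tree.collides((0.25, 0.1, -0.3), 0.1))  # True: probe touches the lens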

  18. A Coalition Approach to Higher-Level Fusion

    DTIC Science & Technology

    2009-07-01

    Only fragmentary indexing excerpts of this report's abstract survive: WISE can run both in an interactive environment and as a simulation which operates without any human interaction, and in a typical wargame it might portray a complex warfighting environment; higher-level fusion is about establishing the story behind the data (as opposed to the environment itself), and thus about achieving situational awareness for the users; and it is important to manage the relationship between the users and the virtual adviser so that context can be conveyed without disrupting the users.

  19. WebVR: an interactive web browser for virtual environments

    NASA Astrophysics Data System (ADS)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static as well as streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.

  20. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the most simple way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. President and founder of 123 Certification Inc., Mr. Claude Choquet Ing. Msc. IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents world-wide for a gesture control platform with leading edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head mounted display (HMD), a 6 degrees of freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD are interacting online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a learning curve for each student and each Virtual Welding Class can be plotted for an instructor's review or a required third-party evaluation.

  1. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
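
    The sketch below is not Charliecloud itself; it only illustrates the kind of workflow described above, launching a few QEMU VMs with parameters gathered from the environment (e.g., a SLURM node count). The image path, the fallback values and the chosen QEMU flags are illustrative assumptions.

      # Hedged sketch: start a small "virtual cluster" of QEMU virtual machines.
      import os
      import subprocess

      def launch_virtual_cluster(image="node.qcow2"):
          """Start one QEMU VM per requested node and return the process handles."""
          n_vms = int(os.environ.get("SLURM_NNODES", "2"))   # batch-system hint
          mem_mb = int(os.environ.get("VC_MEM_MB", "1024"))  # hypothetical knob

          procs = []
          for i in range(n_vms):
              cmd = [
                  "qemu-system-x86_64",
                  "-m", str(mem_mb),
                  "-smp", "1",
                  "-snapshot",      # do not modify the shared base image
                  "-nographic",
                  "-drive", "file={},format=qcow2".format(image),
              ]
              procs.append(subprocess.Popen(cmd))
              print("started VM", i, "pid", procs[-1].pid)
          return procs

      if __name__ == "__main__":
          launch_virtual_cluster()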

  2. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  3. Design of an immersive simulator for assisted power wheelchair driving.

    PubMed

    Devigne, Louise; Babel, Marie; Nouviale, Florian; Narayanan, Vishnu K; Pasteau, Francois; Gallien, Philippe

    2017-07-01

    Driving a power wheelchair is a difficult and complex visual-cognitive task. As a result, some people with visual and/or cognitive disabilities cannot access the benefits of a power wheelchair because their impairments prevent them from driving safely. In order to improve their access to mobility, we have previously designed a semi-autonomous assistive wheelchair system which progressively corrects the trajectory as the user manually drives the wheelchair and smoothly avoids obstacles. Developing and testing such systems for wheelchair driving assistance requires a significant amount of material resources and clinician time. With Virtual Reality technology, prototypes can be developed and tested in a risk-free and highly flexible Virtual Environment before equipping and testing a physical prototype. Additionally, users can "virtually" test and train more easily during the development process. In this paper, we introduce a power wheelchair driving simulator allowing the user to navigate with a standard wheelchair in an immersive 3D Virtual Environment. The simulation framework is designed to be flexible so that we can use different control inputs. In order to validate the framework, we first performed tests on the simulator with able-bodied participants during which the user's Quality of Experience (QoE) was assessed through a set of questionnaires. Results show that the simulator is a promising tool for future works as it generates a good sense of presence and requires rather low cognitive effort from users.

  4. A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service

    PubMed Central

    Lee, Jun; Kim, Jee-In; Kang, Lin-Woo

    2012-01-01

    Collaborative research on three-dimensional molecular modeling can be limited by differences in time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments. PMID:22927721

  5. An integrated pipeline to create and experience compelling scenarios in virtual reality

    NASA Astrophysics Data System (ADS)

    Springer, Jan P.; Neumann, Carsten; Reiners, Dirk; Cruz-Neira, Carolina

    2011-03-01

    One of the main barriers to create and use compelling scenarios in virtual reality is the complexity and time-consuming efforts for modeling, element integration, and the software development to properly display and interact with the content in the available systems. Still today, most virtual reality applications are tedious to create and they are hard-wired to the specific display and interaction system available to the developers when creating the application. Furthermore, it is not possible to alter the content or the dynamics of the content once the application has been created. We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios, ranging from large cities to sparsely populated rural areas that incorporate both static components (e. g., houses, trees) and dynamic components (e. g., people, vehicles) as well as events, such as explosions or ambient noise. Our pipeline has four basic components. First, an environment designer, where users sketch the overall layout of the scenario, and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor used for authoring the complete scenario, incorporate the dynamic elements and events, fine tune the automatically generated environment, define the execution conditions of the scenario, and set up any data gathering that may be necessary during the execution of the scenario. Third, a run-time environment for different virtual-reality systems provides users with the interactive experience as designed with the designer and the editor. And fourth, a bi-directional monitoring system that allows for capturing and modification of information from the virtual environment. One of the interesting capabilities of our pipeline is that scenarios can be built and modified on-the-fly as they are being presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor on a control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality display. In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main use of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more general context.

  6. Virtual interface environment

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1986-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  7. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate 3D mock-ups and make them interoperable in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept for physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that automatically creates 3D databases directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray tracing rendering of CHORALE for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.

  8. Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study

    ERIC Educational Resources Information Center

    Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat

    2013-01-01

    Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…

  9. How to avoid simulation sickness in virtual environments during user displacement

    NASA Astrophysics Data System (ADS)

    Kemeny, A.; Colombet, F.; Denoual, T.

    2015-03-01

    Driving simulation (DS) and Virtual Reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands, including steering wheels, joysticks and nomad devices. High values of transport delay (the time lag between an action and the corresponding rendering cues) and/or the visual-vestibular conflict, due to the discrepancies perceived by the human visual and vestibular systems when driving or displacing using a control device, induce the so-called simulation sickness. While the visual transport delay can be efficiently reduced using high frame rates, the visual-vestibular conflict is inherent to VR when motion platforms are not used. In order to study the impact of displacements on simulation sickness, we have tested various driving scenarios in Renault's 5-sided ultra-high resolution CAVE. First results indicate that low-speed displacements with longitudinal and lateral accelerations below given perception thresholds are well accepted by a large number of users, whereas relatively high values are only accepted by experienced users and induce VR-induced symptoms and effects (VRISE) in novice users, with the worst-case scenario corresponding to rotational displacements. These results will be used for optimization techniques at Arts et Métiers ParisTech for motion sickness reduction in virtual environments for industrial, research, educational or gaming applications.
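
    A minimal sketch of the kind of acceleration limiting these results suggest is given below; the numerical thresholds are purely illustrative assumptions, not the perception thresholds measured in the study.

      # Hedged sketch: clamp commanded accelerations during virtual displacement.
      LONGITUDINAL_LIMIT = 0.5   # m/s^2, assumed comfort threshold
      LATERAL_LIMIT = 0.3        # m/s^2, assumed comfort threshold

      def clamp(value, limit):
          return max(-limit, min(limit, value))

      def limit_displacement_accel(a_long, a_lat):
          """Clamp commanded accelerations before applying them to the virtual camera."""
          return clamp(a_long, LONGITUDINAL_LIMIT), clamp(a_lat, LATERAL_LIMIT)

      if __name__ == "__main__":
          # A joystick command asking for a brisk turn gets softened for novice users.
          print(limit_displacement_accel(1.2, -0.8))  # -> (0.5, -0.3)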

  10. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, when using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensory incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a Flystick®, we tested the effect of sensory aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensory aids tended to impact spatial learning negatively. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower than in experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  11. Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2002-01-01

    This paper demonstrates that the capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, create a unique and valuable system that gives users an efficient virtual capability to tour facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instructions on operating both scripted and unscripted tours, as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.

  12. The VERITAS Facility: A Virtual Environment Platform for Human Performance Research

    DTIC Science & Technology

    2016-01-01

    Only fragmentary indexing excerpts of this report's abstract survive: the IAE supports the audio environment that users experience during the course of an experiment, including environmental sounds and user-to… (truncated); in the future, the authors are looking towards a database-based system that would use MySQL or an equivalent product to store the large data sets and provide standard… (truncated).

  13. Future Evolution of Virtual Worlds as Communication Environments

    NASA Astrophysics Data System (ADS)

    Prisco, Giulio

    Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.

  14. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  15. Computer Assisted Virtual Environment - CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  16. Navigating Massively Multiplayer Online Games: Evaluating 21st Century Skills for Learning within Virtual Environments

    ERIC Educational Resources Information Center

    McCreery, Michael P.; Schrader, P. G.; Krach, S. Kathleen

    2011-01-01

    There is a substantial and growing interest in immersive virtual spaces as contexts for 21st century skills like problem solving, communication, and collaboration. However, little consideration has been given to the ways in which users become proficient in these environments or what types of target behaviors are associated with 21st century…

  17. Using Learning Analytics to Identify Medical Student Misconceptions in an Online Virtual Patient Environment

    ERIC Educational Resources Information Center

    Poitras, Eric G.; Naismith, Laura M.; Doleck, Tenzin; Lajoie, Susanne P.

    2016-01-01

    This study aimed to identify misconceptions in medical student knowledge by mining user interactions in the MedU online learning environment. Data from 13000 attempts at a single virtual patient case were extracted from the MedU MySQL database. A subgroup discovery method was applied to identify patterns in learner-generated annotations and…

  18. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    2018-05-30

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  19. CyberDeutsch: Language Production and User Preferences in a Moodle Virtual Learning Environment

    ERIC Educational Resources Information Center

    Stickler, Ursula; Hampel, Regine

    2010-01-01

    This case study focuses on two learners who took part in an intensive online German course offered to intermediate level students in the Department of Languages of the Open University. The course piloted the use of a Moodle-based virtual learning environment and a range of new online tools which lend themselves to different types of language…

  20. A collaborative interaction and visualization multi-modal environment for surgical planning.

    PubMed

    Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot

    2009-01-01

    The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team in a remote location working on a desktop machine can collaborate in the examination and discussion for procedures such as diagnosis, surgical planning, teaching and tele-mentoring.
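
    The network syncing described above can be illustrated with the small sketch below; the JSON message schema, the UDP transport and the example address are illustrative assumptions, not the system's actual protocol.

      # Hedged sketch: broadcast a scene change so the peer application updates.
      import json
      import socket

      PEERS = [("192.0.2.10", 9000)]   # example address of the remote application

      def broadcast_update(sock, obj_id, transform):
          """Send a scene update (e.g. a repositioned cutting plane) to all peers."""
          message = json.dumps({"object": obj_id, "transform": transform}).encode()
          for peer in PEERS:
              sock.sendto(message, peer)

      def apply_update(raw, scene):
          """Apply a received update to the local copy of the shared scene."""
          update = json.loads(raw.decode())
          scene[update["object"]] = update["transform"]

      if __name__ == "__main__":
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          local_scene = {}
          broadcast_update(sock, "segmentation_mesh", [1.0, 0.0, 0.0, 12.5])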

  1. Lunar Exploration Island, NASA’s Return to the Moon in Second Life

    NASA Astrophysics Data System (ADS)

    Ireton, F. M.; Bleacher, L.; Day, B.; Hsu, B. C.; Mitchell, B. K.

    2009-12-01

    Second Life is a metaverse, a massively multi-user virtual world (MMVR) community. With over 9 million users worldwide, there are 40,000-50,000 users online at any one time. Second Life hosts over 200 educational and institutional simulation locations termed “islands” or sims that are developed by users, providing support for education and business endeavors. Online tools are provided to construct structures and landforms simulating a real world in a virtual three-dimensional environment. Users develop a persona and are seen on screen as a human figure or avatar. Avatars move in Second Life by walking, flying, or teleporting and interact with other users via text or voice chat. This poster details the design and creation of the Second Life exhibit hall for NASA’s Lunar Precursor Robotics Program and the LRO/LCROSS missions. The hall has been placed on the Lunar Exploration Island (LEI) in Second Life. Avatars enter via teleportation to an orientation room with information about the project, a simulator map, and other information. A central hall of flight houses exhibits pertaining to the LRO/LCROSS missions and includes full-size models of the two spacecraft and the launch vehicle. Storyboards with information about the missions interpret the exhibits, while links to external websites provide further information on the missions, both spacecraft instrument suites, and EPO directed to support the missions. The sim includes several sites for meetings and a conference amphitheater with a stage and screen for video links such as live broadcasts of conferences and speakers. A link is provided to NASA TV for live viewing of LRO/LCROSS launch and impact activities and other NASA events. Recently, visitors have viewed the Hubble servicing mission and several shuttle launches as well as the LRO/LCROSS launch.

  2. A compact eyetracked optical see-through head-mounted display

    NASA Astrophysics Data System (ADS)

    Hua, Hong; Gao, Chunyu

    2012-03-01

    An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical HMD does, while additionally tracking the gaze direction of the user. There is ample evidence that a fully integrated ET-HMD system offers multi-fold benefits, not only to fundamental scientific research but also to emerging applications of such technology. For instance, eye-tracking capability in HMDs adds a valuable tool and objective metric for scientists to quantitatively assess user interaction with 3D environments and to investigate the effectiveness of various 3D visualization technologies for specific tasks including training, education, and augmented cognition. In this paper, we present an innovative optical approach to the design of an optical see-through ET-HMD system based on freeform optical technology and an optical scheme that uniquely combines the display optics with the eye imaging optics. A preliminary design of the described ET-HMD system is presented.

  3. Virtual workstation - A multimodal, stereoscopic display environment

    NASA Astrophysics Data System (ADS)

    Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.

    1987-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  4. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With these environments, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  5. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient for almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  6. Exploring engagement in a virtual community of practice in pediatric rehabilitation: who are non-users, lurkers, and posters?

    PubMed

    Hurtubise, Karen; Pratte, Gabrielle; Rivard, Lisa; Berbari, Jade; Héguy, Léa; Camden, Chantal

    2017-12-20

    Communities of practice are increasingly recognized in rehabilitation as useful knowledge transfer tools; however, little is known about their users. This exploratory study describes the characteristics of participants and non-participants invited to engage in a pediatric rehabilitation virtual community of practice. In addition, we explored virtual community of practice utilization behaviors, engagement predictors, and the impact of strategies designed to foster engagement. Participants' demographics, including information-seeking style and organizational e-readiness, as well as online platform frequency-of-use data, were collected and analyzed using descriptive, comparative, and predictive statistics. Seventy-four percent of those invited used the virtual community of practice. Users had fewer years of experience in pediatric rehabilitation than non-users. Among the users, 71% were classified as "lurkers," who engaged through reading content only, while 29% were classified as "posters," who edited online content. Predictive factors were not uncovered; however, an increased number of forum visits correlated with being a poster, a non-information seeker, an employee of an organization demonstrating e-readiness, and regularly working with children with the virtual community of practice's specific condition. User-engagement strategies increased visits to the forum. These findings will assist rehabilitation leaders in leveraging rehabilitation-specific virtual communities of practice to improve knowledge transfer and practice in pediatric rehabilitation and disability management. Implications for Rehabilitation: Communities of practice are increasingly recognized as useful knowledge transfer tools for rehabilitation professionals and are made more accessible thanks to virtual technologies. Our virtual community of practice was found to be optimized in health care organizations with an electronic culture, when the topic area had daily relevance to its target audience, and was particularly beneficial for those with limited years of experience in pediatric rehabilitation. A strongly committed, selected leadership team with the technological skills, content expertise, and designated time to maintain the site and to nurture discussion was deemed vital in fostering knowledge exchange in this context. User-focused engagement strategies showed promise in increasing visits to the virtual community of practice. Our study supports the importance of multi-pronged approaches to enhancing health care professionals' knowledge and skills. Findings from this study will assist rehabilitation leaders in optimally leveraging rehabilitation-specific virtual communities of practice to improve knowledge transfer in pediatric rehabilitation and disability management.

  7. Load Balancing in Multi Cloud Computing Environment with Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Vhansure, Fularani; Deshmukh, Apurva; Sumathy, S.

    2017-11-01

    Cloud computing offers a pool of resources on a pay-per-use model, and the number of user requests for its services is increasing rapidly. Load balancing becomes an issue because no single node can handle so many requests at a time; the underlying assignment problem is NP-complete. In a traditional system, an objective function with several parameter values is maximized to find the best individual solutions. The challenge grows when the solution space has many parameters, and a further challenge is to optimize functions that are much more complex. In this paper, techniques for handling load balancing both virtually (across VMs) and physically (across nodes) using a genetic algorithm are discussed.
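
    The abstract does not give the algorithm's encoding or fitness function; as an illustration only, the sketch below assumes a chromosome that assigns each task to a VM and a fitness that penalizes the most heavily loaded machine.

    ```python
    # Illustrative sketch of a genetic algorithm for VM load balancing
    # (encoding and fitness are assumptions, not the paper's formulation).
    import random

    def fitness(chromosome, task_sizes, n_vms):
        load = [0.0] * n_vms
        for task, vm in zip(task_sizes, chromosome):
            load[vm] += task
        return -max(load)  # minimize the load of the busiest VM (makespan)

    def evolve(task_sizes, n_vms, pop_size=50, generations=200, mutation=0.05):
        pop = [[random.randrange(n_vms) for _ in task_sizes] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda c: fitness(c, task_sizes, n_vms), reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(task_sizes))
                child = a[:cut] + b[cut:]                       # one-point crossover
                child = [random.randrange(n_vms) if random.random() < mutation else g
                         for g in child]                        # per-gene mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda c: fitness(c, task_sizes, n_vms))

    # Example: assignment = evolve(task_sizes=[3, 1, 4, 1, 5, 9, 2, 6], n_vms=3)
    ```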

  8. Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia

    PubMed Central

    Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos

    2015-01-01

    Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches such as cognitive models, in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules embodied in cognitive architectures that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multiple tasks, and monitored the performance of infotainment-related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interface design, supported by increased task complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282

  9. SU-F-P-18: Development of the Technical Training System for Patient Set-Up Considering Rotational Correction in the Virtual Environment Using Three-Dimensional Computer Graphic Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imura, K; Fujibuchi, T; Hirata, H

    Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on treatment effect for image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, considering rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using a 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to reproduce the actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, considering alignment between the skin marker and the laser on the virtual patient model, were displayed as numerical values expressed in SI units together with the directions of arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of the body phantom, wrapped with a gyroscope belt, placed on the table in real space. These rotational errors were evaluated by describing vector outer-product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points, calculated about the center point, could be corrected efficiently, with the script mathematically indicating the minimal correction. Conclusion: By utilizing the script to correct the rotational errors as well as to provide accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual users to find efficient positional correction methods easily.
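
    As an illustration of the vector outer-product and trigonometric evaluation mentioned above (not the authors' actual Unity script), the sketch below computes the signed rotation error about a chosen axis through a center point on the virtual body axis.

    ```python
    # Illustrative sketch: signed rotation error about a chosen axis, computed
    # from the cross (outer) product and trigonometric functions, with a point
    # on the virtual body axis used as the rotation center.
    import numpy as np

    def rotation_error_deg(p_ref, p_meas, center, axis):
        """Angle (degrees) needed to rotate p_meas onto p_ref about `axis` through `center`."""
        axis = axis / np.linalg.norm(axis)
        def in_plane(p):
            # Project the point into the plane perpendicular to the rotation axis.
            v = p - center
            return v - np.dot(v, axis) * axis
        a, b = in_plane(p_ref), in_plane(p_meas)
        cross = np.cross(b, a)
        angle = np.arctan2(np.dot(cross, axis), np.dot(a, b))
        return np.degrees(angle)

    # Example: marker measured about 5 degrees off around the vertical table axis.
    # rotation_error_deg(np.array([1, 0, 0]), np.array([0.996, 0.087, 0]),
    #                    center=np.zeros(3), axis=np.array([0, 0, 1]))
    ```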

  10. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software issues and other complexities, which are discussed in the paper.

  11. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, namely the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.
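
    The CAVEPIPE tools themselves are not shown in the abstract; as a generic sketch of the HDR capture step, the snippet below merges a bracketed exposure series into a radiance map using OpenCV's Debevec method (file names and exposure times are placeholders).

    ```python
    # Generic sketch of merging bracketed exposures into an HDR radiance map;
    # not the CAVEPIPE implementation. File names and times are placeholders.
    import cv2
    import numpy as np

    files = ["exp_1_250.jpg", "exp_1_60.jpg", "exp_1_15.jpg", "exp_1_4.jpg"]
    exposure_times = np.array([1/250, 1/60, 1/15, 1/4], dtype=np.float32)
    images = [cv2.imread(f) for f in files]

    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(images, exposure_times)    # recover camera response curve

    merge = cv2.createMergeDebevec()
    hdr = merge.process(images, exposure_times, response)   # float32 radiance map

    cv2.imwrite("scene.hdr", hdr)                            # Radiance .hdr for later compositing
    ```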

  12. Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    2002-01-01

    The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.

  13. VRUSE--a computerised diagnostic tool: for usability evaluation of virtual/synthetic environment systems.

    PubMed

    Kalawsky, R S

    1999-02-01

    A special questionnaire (VRUSE) has been designed to measure the usability of a VR system according to the attitudes and perceptions of its users. Important aspects of VR systems were carefully derived to produce key usability factors for the questionnaire. Unlike questionnaires designed for generic interfaces, VRUSE is specifically designed for evaluating virtual environments: it is a diagnostic tool providing a wealth of information about a user's view of the interface. VRUSE can be used to great effect alongside other evaluation techniques to pinpoint problematic areas of a VR interface. Other applications include benchmarking of competitor VR systems.

  14. Shared virtual environments for telerehabilitation.

    PubMed

    Popescu, George V; Burdea, Grigore; Boian, Rares

    2002-01-01

    Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring, the monitoring portal for hand telerehabilitation, was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations.

  15. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case, a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, to operate the computer through the connected instrumentarium, and to provide sophisticated multimedia interfaces.

  16. Using Virtual Reality to Improve Walking Post-Stroke: Translation to Individuals with Diabetes

    PubMed Central

    Deutsch, Judith E

    2011-01-01

    Use of virtual reality (VR) technology to improve walking for people post-stroke has been studied for its clinical application since 2004. The hardware and software used to create these systems have varied but have predominantly consisted of projected environments with users walking on treadmills. Transfer of training from the virtual environment to real-world walking has modest but positive research support. Translation of the research findings to clinical practice has been hampered by commercial availability and costs of the VR systems. Suggestions for how the work for individuals post-stroke might be applied and adapted for individuals with diabetes and other impaired ambulatory conditions include involvement of the target user groups (both practitioners and clients) early in the design and integration of activity and education into the systems. PMID:21527098

  17. Using virtual reality to improve walking post-stroke: translation to individuals with diabetes.

    PubMed

    Deutsch, Judith E

    2011-03-01

    Use of virtual reality (VR) technology to improve walking for people post-stroke has been studied for its clinical application since 2004. The hardware and software used to create these systems have varied but have predominantly consisted of projected environments with users walking on treadmills. Transfer of training from the virtual environment to real-world walking has modest but positive research support. Translation of the research findings to clinical practice has been hampered by commercial availability and costs of the VR systems. Suggestions for how the work for individuals post-stroke might be applied and adapted for individuals with diabetes and other impaired ambulatory conditions include involvement of the target user groups (both practitioners and clients) early in the design and integration of activity and education into the systems. © 2011 Diabetes Technology Society.

  18. The feasibility and acceptability of virtual environments in the treatment of childhood social anxiety disorder.

    PubMed

    Sarver, Nina Wong; Beidel, Deborah C; Spitalnick, Josh S

    2014-01-01

    Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder. However, the in vivo peer generalization sessions and standard parent-assisted homework assignments were substituted by practice in a virtual environment. Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. In addition, parents were satisfied with the virtual environment augmented treatment and indicated that they would recommend the program to family and friends. Findings indicate that the virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder.

  19. The feasibility and acceptability of virtual environments in the treatment of childhood social anxiety disorder

    PubMed Central

    Wong, Nina; Beidel, Deborah C.; Spitalnick, Josh

    2013-01-01

    Objective Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Method Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder. However, the in vivo peer generalization sessions and standard parent-assisted homework assignments were substituted by practice in a virtual environment. Results Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. Additionally, parents were satisfied with the virtual environment augmented treatment and indicated that they would recommend the program to family and friends. Conclusion Virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder. PMID:24144182

  20. Research on multi-user encrypted search scheme in cloud environment

    NASA Astrophysics Data System (ADS)

    Yu, Zonghua; Lin, Sui

    2017-05-01

    Aiming at the existing problems of multi-user encrypted search schemes in the cloud computing environment, a basic multi-user encrypted search scheme is proposed first, and the basic scheme is then extended to support anonymous, hierarchical management of authority. Compared with most existing schemes, this scheme achieves not only the protection of keyword information but also the protection of user identity privacy; at the same time, data owners, rather than the cloud server, directly control user query permissions. In addition, through the use of a special query key generation rule, hierarchical management of users' query permissions is achieved. The security analysis shows that the scheme is secure, and the performance analysis and experimental data show that the scheme is practicable.
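
    The paper's actual key-generation rule is not given in the abstract; purely as an illustration of hierarchical query permissions, the sketch below derives each level's query key from its parent with HMAC, so a higher authority can recompute and manage the keys below it while a lower level cannot recover the keys above it.

    ```python
    # Purely illustrative (not the paper's construction): hierarchical query-key
    # derivation via HMAC chaining, plus a keyword trapdoor that hides the keyword.
    import hmac
    import hashlib

    def derive_key(parent_key: bytes, level_label: str) -> bytes:
        return hmac.new(parent_key, level_label.encode(), hashlib.sha256).digest()

    def trapdoor(query_key: bytes, keyword: str) -> bytes:
        """Keyword trapdoor sent to the cloud server; the keyword itself stays hidden."""
        return hmac.new(query_key, keyword.encode(), hashlib.sha256).digest()

    owner_key  = b"data-owner-master-secret"           # held by the data owner only
    dept_key   = derive_key(owner_key, "department-A") # shared with department admins
    member_key = derive_key(dept_key, "member-007")    # shared with one end user

    # The owner or department admin can recompute member_key (and so revoke or audit it);
    # the member cannot climb back up to dept_key or owner_key.
    ```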

  1. Emerging Conceptual Understanding of Complex Astronomical Phenomena by Using a Virtual Solar System

    ERIC Educational Resources Information Center

    Gazit, Elhanan; Yair, Yoav; Chen, David

    2005-01-01

    This study describes high school students' conceptual development of the basic astronomical phenomena during real-time interactions with a Virtual Solar System (VSS). The VSS is a non-immersive virtual environment which has a dynamic frame of reference that can be altered by the user. Ten 10th grade students were given tasks containing a set of…

  2. Effectiveness of a poverty simulation in Second Life®: changing nursing student attitudes toward poor people.

    PubMed

    Menzel, Nancy; Willson, Laura Helen; Doolen, Jessica

    2014-03-11

    Social justice is a fundamental value of the nursing profession, challenging educators to instill this professional value when caring for the poor. This randomized controlled trial examined whether an interactive virtual poverty simulation created in Second Life® would improve nursing students' empathy with and attributions for people living in poverty, compared to a self-study module. We created a multi-user virtual environment populated with families and individual avatars that represented the demographics contributing to poverty and vulnerability. Participants (N = 51 baccalaureate nursing students) were randomly assigned to either Intervention or Control groups and completed the modified Attitudes toward Poverty Scale pre- and post-intervention. The 2.5-hour simulation was delivered three times over a 1-year period to students in successive community health nursing classes. The investigators conducted post-simulation debriefings following a script. While participants in the virtual poverty simulation developed significantly more favorable attitudes on five questions than the Control group, the total scores did not differ significantly. Whereas students readily learned how to navigate inside Second Life®, faculty facilitators required periodic coaching and guidance to be competent. While poverty simulations, whether virtual or face-to-face, have some ability to transform nursing student attitudes, faculty must incorporate social justice concepts throughout the curriculum to produce lasting change.

  3. Interfacing modeling suite Physics Of Eclipsing Binaries 2.0 with a Virtual Reality Platform

    NASA Astrophysics Data System (ADS)

    Harriett, Edward; Conroy, Kyle; Prša, Andrej; Klassner, Frank

    2018-01-01

    To explore alternate methods for modeling eclipsing binary stars, we extrapolate upon PHOEBE's (PHysics Of Eclipsing BinariEs) capabilities in a virtual reality (VR) environment to create an immersive and interactive experience for users. The application used is Vizard, a Python-scripted VR development platform for environments such as the Cave Automatic Virtual Environment (CAVE) and other off-the-shelf VR headsets. Vizard allows all modeling to be precompiled without compromising functionality or usage on its part. The system requires five arguments to be precomputed using PHOEBE's Python front-end: the effective temperature, flux, relative intensity, vertex coordinates, and orbits; the user can opt to implement other features from PHOEBE to be accessed within the simulation as well. Here we present the method for making the data observables accessible in real time. An Oculus Rift will be available for a live showcase of various cases of VR rendering of PHOEBE binary systems, including detached and contact binary stars.
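
    A minimal sketch of the precomputation step, assuming the public PHOEBE 2 Python front-end; the dataset names, mesh columns, and the exact set of quantities exported to Vizard are assumptions rather than details taken from the abstract.

    ```python
    # Minimal sketch using the public PHOEBE 2 Python front-end; dataset names,
    # mesh columns, and the export format are assumptions for illustration.
    import numpy as np
    import phoebe

    b = phoebe.default_binary()                                   # default detached system
    b.add_dataset('lc', times=np.linspace(0, 1, 101), dataset='lc01')
    b.add_dataset('mesh', times=[0.0, 0.25, 0.5, 0.75], dataset='mesh01',
                  columns=['teffs', 'intensities@lc01'])          # per-vertex quantities
    b.run_compute(irrad_method='none')

    fluxes = b.get_value('fluxes', dataset='lc01', context='model')
    teffs  = b.get_value('teffs', component='primary', dataset='mesh01',
                         context='model', time=0.25)

    np.save('fluxes.npy', fluxes)   # precomputed arrays handed to the VR application
    np.save('teffs.npy', teffs)
    ```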

  4. Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR.

    PubMed

    Langbehn, Eike; Lubos, Paul; Bruder, Gerd; Steinicke, Frank

    2017-04-01

    Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.
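
    As an illustration (not the authors' implementation), the sketch below assumes a bending gain defined as the ratio of real-path curvature to virtual-path curvature and shows the extra camera yaw injected per frame so that a physical circle of radius r_virtual / gain is perceived as the intended virtual curve.

    ```python
    # Illustrative sketch of applying a bending gain; the gain definition here
    # (real-path curvature divided by virtual-path curvature) is an assumption.
    import math

    def injected_yaw_per_frame(speed_mps, r_virtual, gain, dt):
        """Extra rotation (radians) added to the virtual camera during one frame."""
        r_real = r_virtual / gain                 # tighter physical circle for gain > 1
        omega_virtual = speed_mps / r_virtual     # angular velocity along the virtual curve
        omega_real = speed_mps / r_real           # angular velocity along the physical curve
        return (omega_real - omega_virtual) * dt

    # Example: walking 1.0 m/s on a virtual 4 m curve with gain 1.5 at 90 fps.
    # yaw = injected_yaw_per_frame(1.0, 4.0, 1.5, 1 / 90)
    ```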

  5. CIS4/403: Design and Implementation of an Intranet-based system for Real-Time Tele-Consultation in Oncology

    PubMed Central

    Eccher, C; Berloffa, F; Demichelis, F; Larcher, B; Galvagni, M; Sboner, A; Graiff, A; Forti, S

    1999-01-01

    Introduction This study describes a tele-consultation system (TCS) developed to provide a computing environment over a Wide Area Network (WAN) in North Italy (Province of Trento) that can be used by two or more physicians to share medical data and to work co-operatively on medical records. A pilot study has been carried out in oncology to assess the effectiveness of the system. The aim of this project is to facilitate the management of oncology patients by improving communication among the specialists of central and district hospitals. Methods and Results The TCS is an Intranet-based solution. The Intranet is based on a PC WAN with Windows NT Server, Microsoft SQL Server, and Internet Information Server. TCS is composed of native and custom applications developed in the Microsoft Windows (9x and NT) environment. The basic component of the system is the multimedia digital medical record, structured as a collection of HTML and ASP pages. A distributed relational database allows users to store and retrieve medical records, accessed by a dedicated Web browser via the Web Server. The medical data to be stored and the presentation architecture of the clinical record were determined in close collaboration with the clinicians involved in the project. TCS allows a multi-point tele-consultation (TC) among two or more participants on remote computers, providing synchronized surfing through the clinical report. A set of collaborative and personal tools, including a whiteboard with drawing tools, point-to-point digital audio-conferencing, chat, a local notepad, and an e-mail service, are integrated in the system to provide a user-friendly environment. TCS has been developed as a client-server architecture. The client part of the system is based on the Microsoft Web Browser control and provides the user interface and the tools described above. The server part, running at all times on a dedicated computer, accepts connection requests and manages the connections among the participants in a TC, allowing multiple TCs to run simultaneously. TCS has been developed in the Visual C++ environment using the MFC library and COM technology; ActiveX controls have been written in Visual Basic to perform dedicated tasks from inside the HTML clinical report. Before deploying the system in the hospital departments involved in the project, TCS was tested in our laboratory by clinicians involved in the project to evaluate the usability of the system. Discussion TCS has the potential to support a "multi-disciplinary distributed virtual oncological meeting". The specialists of different departments and different hospitals can attend "virtual meetings" and interactively discuss medical data. An expected benefit of the "virtual meeting" is the possibility of providing expert remote advice from oncologists to peripheral cancer units in formulating treatment plans, conducting follow-up sessions, and supporting clinical research.

  6. Virtualized Multi-Mission Operations Center (vMMOC) and its Cloud Services

    NASA Technical Reports Server (NTRS)

    Ido, Haisam Kassim

    2017-01-01

    This presentation will cover the current and future technical and organizational opportunities and challenges of virtualizing a multi-mission operations center. The full deployment of Goddard Space Flight Center's (GSFC) Virtualized Multi-Mission Operations Center (vMMOC) is nearly complete. The Space Science Mission Operations (SSMO) organization's spacecraft ACE, Fermi, LRO, MMS (4), OSIRIS-REx, SDO, SOHO, Swift, and Wind are in the process of being fully migrated to the vMMOC. The benefits of the vMMOC will be the normalization and standardization of IT services, mission operations, maintenance, and development, as well as ancillary services and policies such as collaboration tools, change management systems, and IT security. The vMMOC will also provide operational efficiencies regarding hardware, IT domain expertise, training, maintenance, and support. The presentation will also cover SSMO's secure Situational Awareness Dashboard, delivered in an integrated, fleet-centric, cloud-based web services fashion. Additionally, the SSMO Telemetry as a Service (TaaS) will be covered, which allows authorized users and processes to access telemetry for the entire SSMO fleet and for the entirety of each spacecraft's history. Both services leverage cloud services in a secure FISMA High and FedRAMP environment, and also leverage distributed object stores to house and provide the telemetry. The services are also in the process of leveraging the elasticity and horizontal scalability of cloud computing. In the design phase is the Navigation as a Service (NaaS), which will provide a standardized, efficient, and normalized service for the fleet's space flight dynamics operations. Additional future services that may be considered are Ground Segment as a Service (GSaaS), Telemetry and Command as a Service (TCaaS), Flight Software Simulation as a Service, etc.

  7. Virtual reality: past, present and future.

    PubMed

    Gobbetti, E; Scateni, R

    1998-01-01

    This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domains, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.

  8. The James Webb Space Telescope RealWorld-InWorld Design Challenge: Involving Professionals in a Virtual Classroom

    NASA Astrophysics Data System (ADS)

    Masetti, Margaret; Bowers, S.

    2011-01-01

    Students around the country are becoming experts on the James Webb Space Telescope by designing solutions to two of the design challenges presented by this complex mission. RealWorld-InWorld has two parts; the first (the Real World portion) has high-school students working face to face in their classroom as engineers and scientists. The InWorld phase starts December 15, 2010 as interested teachers and their teams of high school students register to move their work into a 3D multi-user virtual world environment. At the start of this phase, college students from all over the country choose a registered team to lead InWorld. Each InWorld team is also assigned an engineer or scientist mentor. In this virtual world setting, each team refines their design solutions and creates a 3D model of the Webb telescope. InWorld teams will use 21st century tools to collaborate and build in the virtual world environment. Each team will learn, not only from their own team members, but will have the opportunity to interact with James Webb Space Telescope researchers through the virtual world setting, which allows for synchronous interactions. Halfway through the challenge, design solutions will be critiqued and a mystery problem will be introduced for each team. The top five teams will be invited to present their work during a synchronous Education Forum April 14, 2011. The top team will earn scholarships and technology. This is an excellent opportunity for professionals in both astronomy and associated engineering disciplines to become involved with a unique educational program. Besides the chance to mentor a group of interested students, there are many opportunities to interact with the students as a guest, via chats and presentations.

  9. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open-source real-time operating system, is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is one example which is described.

  10. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
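
    The paper's own automation scripts are not shown; below is a minimal sketch, assuming the openstacksdk client, of launching one of the "virtual" machines on which the phenomenology codes are deployed (cloud, image, flavor, and key-pair names are placeholders).

    ```python
    # Minimal sketch with the openstacksdk client; names of cloud, image, flavor,
    # network, and key pair are placeholders, not values from the paper.
    import openstack

    conn = openstack.connect(cloud='my-cloud')                 # credentials from clouds.yaml

    image  = conn.compute.find_image('ubuntu-22.04')           # base OS for the pheno tools
    flavor = conn.compute.find_flavor('m1.medium')
    net    = conn.network.find_network('private')

    server = conn.compute.create_server(
        name='pheno-node-01',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": net.id}],
        key_name='my-keypair',
    )
    server = conn.compute.wait_for_server(server)              # block until the VM is ACTIVE
    print(server.status)
    ```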

  11. Research on Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Lobeck, William E.

    2002-01-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  12. Research on Intelligent Synthesis Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.; Loftin, R. Bowen

    2002-12-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  13. CliniSpace: a multiperson 3D online immersive training environment accessible through a browser.

    PubMed

    Dev, Parvati; Heinrichs, W LeRoy; Youngblood, Patricia

    2011-01-01

    Immersive online medical environments, with dynamic virtual patients, have been shown to be effective for scenario-based learning (1). However, ease of use and ease of access have been barriers to their use. We used feedback from prior evaluation of these projects to design and develop CliniSpace. To improve usability, we retained the richness of prior virtual environments but modified the user interface. To improve access, we used a Software-as-a-Service (SaaS) approach to present a richly immersive 3D environment within a web browser.

  14. Introducing ORACLE: Library Processing in a Multi-User Environment.

    ERIC Educational Resources Information Center

    Queensland Library Board, Brisbane (Australia).

    Currently being developed by the State Library of Queensland, Australia, ORACLE (On-Line Retrieval of Acquisitions, Cataloguing, and Circulation Details for Library Enquiries) is a computerized library system designed to provide rapid processing of library materials in a multi-user environment. It is based on the Australian MARC format and fully…

  15. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases.

    PubMed

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-04-15

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head mounted displays and joystick allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brain, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.

  16. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases

    PubMed Central

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-01-01

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head mounted displays and joystick allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients’ brain, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies. PMID:25206907

  17. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the features of virtual machines, namely flexibility, easy control, and varied system environments, more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. Firstly, the resources belong to different experiment groups, and user-groups map to resource-groups (the same as experiment-groups) one-to-one or many-to-one. In order to simplify the management of these groups, we designed a permission-controlling component to ensure that the different resource-groups receive suitable jobs. Secondly, to elastically allocate resources to the appropriate resource-group, it is necessary to schedule resources in the same way jobs are scheduled. This paper therefore designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource-group. Thirdly, resources occupied for a long time sometimes need to be preempted. This paper adds a preemption function to the resource scheduler that implements resource preemption based on group priority. Additionally, the preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor by storing the held job information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper shows some cases from the JUNO and LHAASO experiments. The results indicate that multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission-controlling component has been used in the local computing cluster, supporting the JUNO, CMS, and LHAASO experiments, and its scope will be expanded to more experiments in the first half of the year, including DYW, BES, and others. This is evidence that the permission controlling is efficient.
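
    As an illustration only (not the production IHEP code), the sketch below captures the core decision of such a group-based, soft-preemption scheduler: a request takes idle resources first and then preempts virtual machines held by lower-priority groups, whose jobs are held and rematched later rather than killed.

    ```python
    # Illustrative sketch of group-priority-based, soft preemption; group names,
    # quotas, and the hold/rematch hook are assumptions for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Group:
        name: str
        priority: int                        # higher value may preempt lower value
        vms: list = field(default_factory=list)

    def hold_and_rematch_jobs(vm):
        # Soft preemption: jobs on the VM are held, released to idle, and rematched later.
        print(f"holding jobs on {vm}; they will be released and rematched later")

    def allocate(request_group: Group, n_needed: int, idle_vms: list, groups: list):
        granted = idle_vms[:n_needed]
        del idle_vms[:len(granted)]
        still_needed = n_needed - len(granted)
        # Preempt from strictly lower-priority groups, lowest priority first.
        for victim in sorted(groups, key=lambda g: g.priority):
            if still_needed == 0 or victim.priority >= request_group.priority:
                break
            while victim.vms and still_needed > 0:
                vm = victim.vms.pop()
                hold_and_rematch_jobs(vm)
                granted.append(vm)
                still_needed -= 1
        request_group.vms.extend(granted)
        return granted
    ```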

  18. Direct manipulation of virtual objects

    NASA Astrophysics Data System (ADS)

    Nguyen, Long K.

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

  19. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    PubMed Central

    Víctor Rodrigo, Mercado-García

    2017-01-01

    Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861

  20. Moving Virtual Research Environments from high maintenance Stovepipes to Multi-purpose Sustainable Service-oriented Science Platforms

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fraser, Ryan; Wyborn, Lesley; Friedrich, Carsten; Squire, Geoffrey; Barker, Michelle; Moloney, Glenn

    2017-04-01

    The researcher of today is likely to be part of a team distributed over multiple sites that will access data from an external repository and then process the data on a public or private cloud or even on a large centralised supercomputer. They are increasingly likely to use a mixture of their own code, third party software and libraries, or even access global community codes. These components will be connected into Virtual Research Environments (VREs) that will enable members of the research team who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, infrastructures, etc. Many VREs are built in isolation: designed to meet a specific research program, with components tightly coupled and not capable of being repurposed for other use cases - they are becoming 'stovepipes'. The limited number of users of some VREs also means that the cost of maintenance per researcher can be unacceptably high. The alternative is to develop service-oriented Science Platforms that enable multiple communities to develop specialised solutions for specific research programs. The platforms can offer access to data, software tools and processing infrastructures (cloud, supercomputers) through globally distributed, interconnected modules. In Australia, the Virtual Geophysics Laboratory (VGL) was initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools; it is now rapidly evolving into a multi-purpose Earth science platform with access to an increased variety of data, a broader range of tools, users from more sectors and a diversity of computational infrastructures. The expansion has been relatively easy because of the architecture, whereby data, tools and compute resources are loosely coupled via interfaces that are built on international standards and accessed as services wherever possible. In recent years, investments in the discoverability and accessibility of data via online services in Australia mean that data resources can be easily added to the virtual environments as and when required. Another key to increasing the reusability and uptake of a VRE is the capability to capture workflows so that they can be reused and repurposed both within and beyond the community that defined the original use case. Unfortunately, Software-as-a-Service in the research sector is not yet mature. In response, we developed a Scientific Software solutions Center (SSSC) that enables researchers to discover, deploy and then share computational codes, code snippets or processes in both a human- and machine-readable manner. Growth has come not only from within the Earth science community but also from the Australian Virtual Laboratory community, which is building VREs for a diversity of communities such as astronomy, genomics, environment, humanities, climate, etc. Components such as access control, provenance, visualisation and accounting are common to all scientific domains, and sharing these across multiple domains reduces costs but, more importantly, increases the ability to undertake interdisciplinary science. These efforts are transitioning VREs to more sustainable service-oriented Science Platforms that can be delivered in an agile, adaptable manner for broader community interests.
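
    The loose coupling described above rests on data being exposed through standards-based web services. As a hedged illustration only (the endpoint URL, layer name and parameters below are placeholders, not actual VGL services), pulling features from an OGC Web Feature Service into any VRE component can be as simple as an HTTP request:

        import requests

        # Placeholder endpoint and layer name, for illustration only.
        WFS_ENDPOINT = "https://example.org/geoserver/wfs"

        params = {
            "service": "WFS",
            "version": "2.0.0",
            "request": "GetFeature",
            "typeNames": "geophysics:boreholes",   # hypothetical layer
            "outputFormat": "application/json",
            "count": 100,
        }
        response = requests.get(WFS_ENDPOINT, params=params, timeout=30)
        features = response.json()["features"]
        print(len(features), "features retrieved")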

  1. Estimation of detection thresholds for redirected walking techniques.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus

    2010-01-01

    In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
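
    The detection thresholds above translate directly into the range of gains a redirection controller may safely apply. The sketch below is illustrative only (the function and variable names are not from the paper): it clamps rotation, translation and curvature gains to the reported limits and maps one frame of real motion to virtual camera motion.

        import math

        # Thresholds reported in the abstract, expressed as gains (assumption:
        # gain = virtual motion / real motion).
        ROT_GAIN_MIN, ROT_GAIN_MAX = 0.67, 1.25      # turned ~49% more / ~20% less
        TRANS_GAIN_MIN, TRANS_GAIN_MAX = 0.86, 1.26  # distances -14% / +26%
        MIN_CURVATURE_RADIUS_M = 22.0                # minimum undetectable arc radius

        def clamp(value, lo, hi):
            return max(lo, min(hi, value))

        def redirect(real_yaw_delta_deg, real_step_m, rot_gain=1.0,
                     trans_gain=1.0, curvature_radius_m=None):
            """Map one frame of real head motion to virtual camera motion."""
            rot_gain = clamp(rot_gain, ROT_GAIN_MIN, ROT_GAIN_MAX)
            trans_gain = clamp(trans_gain, TRANS_GAIN_MIN, TRANS_GAIN_MAX)
            virtual_yaw_delta = rot_gain * real_yaw_delta_deg
            virtual_step = trans_gain * real_step_m
            if curvature_radius_m is not None:
                # Curvature gain: inject a small extra virtual rotation per metre
                # walked, so the user compensates by walking a real arc.
                radius = max(curvature_radius_m, MIN_CURVATURE_RADIUS_M)
                virtual_yaw_delta += math.degrees(real_step_m / radius)
            return virtual_yaw_delta, virtual_step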

  2. Providing Personal Assistance in the SAGRES Virtual Museum.

    ERIC Educational Resources Information Center

    Bertoletti, Ana Carolina; Moraes, Marcia Cristina; da Rocha Costa, Antonio Carlos

    The SAGRES system is an educational environment built on the Web that facilitates the organization of visits to museums, presenting museum information bases in a way adapted to the user's characteristics (capacities and preferences). The system determines the group of links appropriate to the user(s) and shows them in a resultant HTML page. In…

  3. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate because the information available is presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the resulting changes, and have an action planning component automatically generate task descriptions for the agents involved, so that actions carried out by the user in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach is to split the job between task deduction in the VR and task 'projection' onto the physical automation components by the automatic action planning component. Besides describing the realized Projective Virtual Reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of information (e.g. sensor data) in an intuitively comprehensible manner.

  4. NASA Virtual Glovebox (VBX): Emerging Simulation Technology for Space Station Experiment Design, Development, Training and Troubleshooting

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard

    2003-01-01

    The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented, and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desktop-sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and defusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.

  5. Design method for multi-user workstations utilizing anthropometry and preference data.

    PubMed

    Mahoney, Joseph M; Kurczewski, Nicolas A; Froede, Erick W

    2015-01-01

    Past efforts have been made to design single-user workstations to accommodate users' anthropometric and preference distributions. However, there is a lack of methods for designing workstations for group interaction. This paper introduces a method for sizing workstations to allow for a personal work area for each user and a shared space for adjacent users. We first create a virtual population with the same anthropometric and preference distributions as an intended demographic of college-aged students. Members of the virtual population are randomly paired to test if their extended reaches overlap but their normal reaches do not. This process is repeated in a Monte Carlo simulation to estimate the total percentage of groups in the population that will be accommodated for a workstation size. We apply our method to two test cases: in the first, we size polygonal workstations for two populations and, in the second, we dimension circular workstations for different group sizes. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
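
    The pairing test described above lends itself to a very small Monte Carlo sketch. The following is illustrative only, with toy reach distributions standing in for the authors' anthropometric and preference data: a pair is "accommodated" by a workstation width if their extended reaches overlap (a shared space exists) while their normal reaches do not (personal space is preserved).

        import random

        def sample_user():
            # Toy model of normal and extended reach in cm (assumed values).
            normal = random.gauss(40.0, 4.0)
            extended = normal + random.gauss(20.0, 3.0)
            return normal, extended

        def pair_accommodated(workstation_width_cm):
            (n1, e1), (n2, e2) = sample_user(), sample_user()
            shared_space = (e1 + e2) > workstation_width_cm    # extended reaches overlap
            personal_space = (n1 + n2) < workstation_width_cm  # normal reaches do not
            return shared_space and personal_space

        def accommodation_rate(workstation_width_cm, trials=100_000):
            hits = sum(pair_accommodated(workstation_width_cm) for _ in range(trials))
            return hits / trials

        for width_cm in (90, 100, 110, 120):
            print(width_cm, round(accommodation_rate(width_cm), 3))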

  6. From toy to tool: the development of immersive virtual reality environments for psychotherapy of specific phobias.

    PubMed

    Bullinger, A H; Roessler, A; Mueller-Spahn, F

    1998-01-01

    Virtual Reality (VR) entered the mental health field some years ago. While the technology itself has been available for more than ten years now, there is still a certain amount of uncertainty among researchers and users as to whether VR will one day fulfill all its promises. In this chapter we give an overview of the implementation of the technology in our mental health research facility in Basel, Switzerland. The development of two applications, for use with claustrophobic and acrophobic patients respectively, serves as an example within this context. Some may say the chapter is too heavily based on technical considerations. Strictly speaking, VR is pure technology, even knowing that this special form of technology has sensory, psychological and even philosophical implications not known from other human-computer interfaces so far. As far as we are concerned, the development of the technology for use within the mental health sector has only just begun. As today's most commonly used immersive output devices (head-mounted displays, shutter glasses) do not have satisfactory resolution, restrict movement and prevent multi-user capabilities, there will be a surge of mental health applications the day that some, or at least the most important, of these obstacles have been overcome.

  7. The Implementation and Validation of a Virtual Environment for Training Powered Wheelchair Manoeuvres.

    PubMed

    John, Nigel W; Pop, Serban R; Day, Thomas W; Ritsos, Panagiotis D; Headleand, Christopher J

    2018-05-01

    Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environments technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study, carried out using thirty-three able-bodied volunteers. The study results indicate, at a significance level of 5 percent, an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent to training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.

  8. An artificial reality environment for remote factory control and monitoring

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.

  9. Training software using virtual-reality technology and pre-calculated effective dose data.

    PubMed

    Ding, Aiping; Zhang, Di; Xu, X George

    2009-05-01

    This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in the VR authoring software EON Studio, using 3-D models of a real nuclear power plant building. Models of avatars representing two workers were adopted, with the arms and legs of the avatar controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and avatars to confine the avatars to certain regions of the virtual environment. Ten different camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control the avatar to carry out radiological engineering tasks using two modes of avatar navigation. A user can also specify two types of radiation source: Cs and Co. The location of the avatar inside the virtual environment during the course of the avatar's movement is linked to the EDE database. The cumulative dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, and thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, more realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data, such as for a source near the body, all of which need to be addressed in future efforts to develop this type of software.
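
    The core of the dose feedback described above is a position-indexed lookup into the pre-calculated EDE database, integrated over time. The sketch below illustrates that idea only; the grid layout, cell size and dose rates are assumptions, not the database used by VR Dose Simulator.

        # (x, y) grid cell -> effective dose rate in mSv/h (assumed values)
        dose_rate_grid = {
            (0, 0): 0.002,
            (0, 1): 0.015,
            (1, 1): 0.120,   # e.g. a cell close to the Cs or Co source
        }

        def cell_of(position_m, cell_size_m=1.0):
            x, y, _z = position_m
            return (int(x // cell_size_m), int(y // cell_size_m))

        def accumulate_dose(path, frame_dt_s=0.033):
            """path: sequence of (x, y, z) avatar positions, one per rendered frame."""
            total_mSv = 0.0
            for position in path:
                rate_mSv_per_h = dose_rate_grid.get(cell_of(position), 0.0)
                total_mSv += rate_mSv_per_h * frame_dt_s / 3600.0
            return total_mSv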

  10. Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Hu; Wu, Ja-Ling

    With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert a user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.

  11. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
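
    The abstract does not specify the exact distance-to-sound mapping, so the following is a hedged sketch of the general idea only: a single point-distance reading is turned into an auditory cue whose repetition rate rises as the obstacle gets closer.

        MAX_RANGE_M = 5.0   # assumed sensing range of the (virtual) EyeCane

        def beep_interval_s(distance_m, min_interval=0.05, max_interval=1.0):
            """Closer obstacles -> shorter interval between beeps."""
            d = max(0.0, min(distance_m, MAX_RANGE_M))
            return min_interval + (max_interval - min_interval) * (d / MAX_RANGE_M)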

  12. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    NASA Astrophysics Data System (ADS)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  13. Learning in the Virtual World: The Pedagogical Potentials of Massively Multiplayer Online Role Playing Games

    ERIC Educational Resources Information Center

    Yu, Tao Wang

    2009-01-01

    A much more attractive way to use the internet has been discovered. Users are represented by avatars in a persistent fantasy 3D world, and the avatars apparently come to occupy a special place in the hearts of their creators (Castronova, 2001). At present, millions of people worldwide have accounts in some kind of virtual environment. Virtual world…

  14. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  15. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
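
    The exact encoding used in the paper is not given in the abstract, but the general virtual-backpointer idea can be sketched as follows (an illustrative assumption, not the authors' structure): each node stores a forward pointer plus a redundant check word derived from the addresses of its neighbours, so a local consistency check, and recovery of a corrupted forward pointer, each take O(1) work during a forward traversal.

        NIL = 0   # index 0 of the node table is reserved as "null"

        class Node:
            def __init__(self, key):
                self.key = key
                self.next = NIL   # forward pointer (index into the node table)
                self.vbp = NIL    # virtual backpointer: prev_index XOR next_index

        def link(table, indices):
            """Build the forward chain and the virtual backpointers."""
            for pos, idx in enumerate(indices):
                prev_idx = indices[pos - 1] if pos > 0 else NIL
                next_idx = indices[pos + 1] if pos + 1 < len(indices) else NIL
                table[idx].next = next_idx
                table[idx].vbp = prev_idx ^ next_idx

        def check_node(table, prev_idx, cur_idx):
            """O(1) local check while moving forward from prev to cur."""
            cur = table[cur_idx]
            return cur.vbp == (prev_idx ^ cur.next)

        def recover_next(table, prev_idx, cur_idx):
            """If cur.next is suspected corrupted, recompute it from the check word."""
            return table[cur_idx].vbp ^ prev_idx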

  16. An innovative virtual reality training tool for orthognathic surgery.

    PubMed

    Pulijala, Y; Ma, M; Pears, M; Peebles, D; Ayoub, A

    2018-02-01

    Virtual reality (VR) surgery using Oculus Rift and Leap Motion devices is a multi-sensory, holistic surgical training experience. A multimedia combination including 360° videos, three-dimensional interaction, and stereoscopic videos in VR has been developed to enable trainees to experience a realistic surgery environment. The innovation allows trainees to interact with the individual components of the maxillofacial anatomy and apply surgical instruments while watching close-up stereoscopic three-dimensional videos of the surgery. In this study, a novel training tool for Le Fort I osteotomy based on immersive virtual reality (iVR) was developed and validated. Seven consultant oral and maxillofacial surgeons evaluated the application for face and content validity. Using a structured assessment process, the surgeons commented on the content of the developed training tool, its realism and usability, and the applicability of VR surgery for orthognathic surgical training. The results confirmed the clinical applicability of VR for delivering training in orthognathic surgery. Modifications were suggested to improve the user experience and interactions with the surgical instruments. This training tool is ready for testing with surgical trainees. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. Implementing virtual reality interfaces for the geosciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.; Jacobsen, J.; Austin, A.

    1996-06-01

    For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, spanning the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display. In the former, rich, extensible and often complex languages are the vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is the obstacle: typing is cumbersome, error-prone and typically slow. In the latter, the user can interact with these parameters by means of highly developed motor skills. Two specific geoscience application areas are highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.

  18. Mission Simulation Facility: Simulation Support for Autonomy Development

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Plice, Laura; Neukom, Christian; Flueckiger, Lorenzo; Wagner, Michael

    2003-01-01

    The Mission Simulation Facility (MSF) supports research in autonomy technology for planetary exploration vehicles. Using HLA (High Level Architecture) across distributed computers, the MSF connects users' autonomy algorithms with provided or third-party simulations of robotic vehicles and planetary surface environments, including onboard components and scientific instruments. Simulation fidelity is variable to meet changing needs as autonomy technology advances in Technology Readiness Level (TRL). A virtual robot operating in a virtual environment offers numerous advantages over actual hardware, including availability, simplicity, and risk mitigation. The MSF is in use by researchers at NASA Ames Research Center (ARC) and has demonstrated basic functionality. Continuing work will support the needs of a broader user base.

  19. Virtual World Astrosociology

    NASA Astrophysics Data System (ADS)

    Bainbridge, William Sims

    2010-01-01

    This essay introduces the opportunity for theory development and even empirical research on some aspects of astrosociology through today's online virtual worlds. The examples covered present life on other planets or in space itself, in a manner that can be experienced by the user and where the user's reactions may simulate to some degree future human behavior in real extraterrestrial environments: Tabula Rasa, Anarchy Online, Entropia Universe, EVE Online, StarCraft and World of Warcraft. Ethnographic exploration of these computerized environments raises many questions about the social science both of space exploration and of direct contact with extraterrestrials. The views expressed in this essay do not necessarily represent the views of the National Science Foundation or the United States.

  20. Temporally coherent 4D video segmentation for teleconferencing

    NASA Astrophysics Data System (ADS)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
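
    A minimal sketch of the depth-plus-RGB idea (not the authors' algorithm) is shown below: fill missing depth values, threshold depth to obtain a coarse foreground mask, smooth the mask over time to reduce flicker, and composite the user onto a virtual background. The thresholds and the crude hole filling are assumptions.

        import numpy as np

        def fill_missing_depth(depth_m, invalid=0):
            d = depth_m.astype(np.float32)
            valid = d != invalid
            d[~valid] = np.median(d[valid]) if valid.any() else 0.0  # crude hole filling
            return d

        def segment_frame(depth_m, near_m=0.3, far_m=1.5):
            d = fill_missing_depth(depth_m)
            return ((d > near_m) & (d < far_m)).astype(np.float32)   # coarse foreground mask

        def temporal_smooth(mask, prev_mask, alpha=0.7):
            # Exponential smoothing across frames reduces mask flicker.
            return alpha * mask + (1.0 - alpha) * prev_mask

        def composite(rgb, mask, background):
            m = mask[..., None]                  # broadcast over colour channels
            return (m * rgb + (1.0 - m) * background).astype(rgb.dtype)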

  1. Virtual reality and interactive 3D as effective tools for medical training.

    PubMed

    Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave

    2003-01-01

    CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience. This maps to the strong need for distance training and planning of surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is that of an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated and corrective procedures demonstrated.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Song

    CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabytes), and more and more interaction between the user and the datasets is required. For the traditional VR application, the limitation of computing power is a major factor preventing large datasets from being visualized effectively. This thesis presents a new system designed to speed up the traditional VR application by using parallel and distributed computing, as well as the idea of using a hand-held device to enhance the interaction between a user and a VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing and graphical user interface design, are used in the development of the final system. As a result, the new system can flexibly be built on a heterogeneous computing environment, dramatically shortening the computation time.

  3. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based-only control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using a user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.

  4. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based-only control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using a user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538
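
    The UDP link mentioned above can be illustrated in a few lines of Python; the address, port and message format here are assumptions for illustration, not the protocol used in the reviewed projects.

        import json
        import socket

        ROBOT_ADDR = ("192.168.0.42", 5005)   # placeholder robot / VR-agent address

        def send_command(sock, command, confidence):
            """Serialize a decoded BCI intention and send it over UDP."""
            message = json.dumps({"cmd": command, "p": round(confidence, 3)})
            sock.sendto(message.encode("utf-8"), ROBOT_ADDR)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_command(sock, "turn_left", 0.87)  # e.g. decoded from a tactile oddball response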

  5. New Desktop Virtual Reality Technology in Technical Education

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.

    2008-01-01

    Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…

  6. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  7. A virtual environment for modeling and testing sensemaking with multisensor information

    NASA Astrophysics Data System (ADS)

    Nicholson, Denise; Bartlett, Kathleen; Hoppenfeld, Robert; Nolan, Margaret; Schatz, Sae

    2014-05-01

    Given today's challenging Irregular Warfare, members of small infantry units must be able to function as highly sensitized perceivers throughout large operational areas. Improved Situation Awareness (SA) in rapidly changing fields of operation may also save the lives of law enforcement personnel and first responders. Critical competencies for these individuals include sociocultural sensemaking, the ability to assess a situation through the perception of essential salient environmental and behavioral cues, and intuitive sensemaking, which allows experts to act with the utmost agility. Intuitive sensemaking and intuitive decision making (IDM), which involve processing information at a subconscious level, have been cited as playing a critical role in saving lives and enabling mission success. This paper discusses the development of a virtual environment for modeling, analysis and human-in-the-loop testing of perception, sensemaking, intuitive sensemaking, decision making (DM), and IDM performance, using state-of-the-art scene simulation and modeled imagery from multi-source systems, under the "Intuition and Implicit Learning" Basic Research Challenge (I2BRC) sponsored by the Office of Naval Research (ONR). We present results from our human systems engineering approach, including 1) development of requirements and test metrics for individual and integrated system components, 2) the system architecture design, 3) images of the prototype virtual environment testing system, and 4) a discussion of the system's current and future testing capabilities. In particular, we examine an Enhanced Interaction Suite testbed to model, test, and analyze the impact of advances in sensor spatial and temporal resolution on a user's intuitive sensemaking and decision making capabilities.

  8. The impact of self-avatars on trust and collaboration in shared virtual environments.

    PubMed

    Pan, Ye; Steed, Anthony

    2017-01-01

    A self-avatar is known to have a potentially significant impact on the user's experience of the immersive content but it can also affect how users interact with each other in a shared virtual environment (SVE). We implemented an SVE for a consumer virtual reality system where each user's body could be represented by a jointed self-avatar that was dynamically controlled by head and hand controllers. We investigated the impact of a self-avatar on collaborative outcomes such as completion time and trust formation during competitive and cooperative tasks. We used two different embodiment levels: no self-avatar and self-avatar, and compared these to an in-person face to face version of the tasks. We found that participants could finish the task more quickly when they cooperated than when they competed, for both the self-avatar condition and the face to face condition, but not for the no self-avatar condition. In terms of trust formation, both the self-avatar condition and the face to face condition led to higher scores than the no self-avatar condition; however, collaboration style had no significant effect on trust built between partners. The results are further evidence of the importance of a self-avatar representation in immersive virtual reality.

  9. A Virtual Environment for People Who Are Blind – A Usability Study

    PubMed Central

    Lahav, O.; Schloerb, D. W.; Kumar, S.; Srinivasan, M. A.

    2013-01-01

    For most people who are blind, exploring an unknown environment can be unpleasant, uncomfortable, and unsafe. Over the past years, the use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on the rise. This research is based on the hypothesis that the supply of appropriate perceptual and conceptual information through compensatory sensorial channels may assist people who are blind with anticipatory exploration. In this research we developed and tested the BlindAid system, which allows the user to explore a virtual environment. The two main goals of the research were: (a) evaluation of different modalities (haptic and audio) and navigation tools, and (b) evaluation of spatial cognitive mapping employed by people who are blind. Our research included four participants who are totally blind. The preliminary findings confirm that the system enabled participants to develop comprehensive cognitive maps by exploring the virtual environment. PMID:24353744

  10. Cloud hosting of the IPython Notebook to Provide Collaborative Research Environments for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Gomez-Dans, Jose; Holt, John

    2015-04-01

    We explore how the popular IPython Notebook computing system can be hosted on a cloud platform to provide a flexible virtual research hosting environment for Earth Observation data processing and analysis, and how this approach can be expanded more broadly into a generic SaaS (Software as a Service) offering for the environmental sciences. OPTIRAD (OPTImisation environment for joint retrieval of multi-sensor RADiances) is a project funded by the European Space Agency to develop a collaborative research environment for Data Assimilation of Earth Observation products for land surface applications. Data Assimilation provides a powerful means to combine multiple sources of data and derive new products for this application domain. To be most effective, it requires close collaboration between specialists in this field, land surface modellers and end users of the data generated. A goal of OPTIRAD, then, is to develop a collaborative research environment to engender shared working. Another significant challenge is that of data volume and complexity. Study of the land surface requires high spatial and temporal resolutions, a relatively large number of variables and the application of algorithms which are computationally expensive. These problems can be addressed with the application of parallel processing techniques on specialist compute clusters. However, scientific users are often deterred by the time investment required to port their codes to these environments. Even when this is successfully achieved, the code may be difficult to readily change or update. This runs counter to the scientific process of continuous experimentation, analysis and validation. The IPython Notebook provides users with a web-based interface to multiple interactive shells for the Python programming language. Code, documentation and graphical content can be saved and shared, making it directly applicable to OPTIRAD's requirements for a shared working environment. Given the web interface, it can readily be made into a hosted service, with Wakari and Microsoft Azure being notable examples. Cloud-hosting of the Notebook allows the same familiar Python interface to be retained but backed by cloud computing attributes of scalability, elasticity and resource pooling. This combination makes it a powerful solution to address the needs of long-tail science users of Big Data: an intuitive interactive interface with which to access powerful compute resources. The IPython Notebook can be hosted as a single-user desktop environment, but the recent development of JupyterHub by the IPython community enables it to be run as a multi-user hosting environment. In addition, IPython.parallel allows parallel compute infrastructure to be exposed through a Python interface. Applying these technologies in combination, a collaborative research environment has been developed for OPTIRAD on the UK JASMIN/CEMS facility's private cloud (http://jasmin.ac.uk). Based on this experience, a generic virtualised solution suitable for use by the wider environmental science community is under development - deployable on JASMIN and portable to third-party cloud platforms.
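
    As a small, hedged example of the IPython.parallel usage pattern mentioned above (assuming a cluster of engines has already been started on the hosted platform, e.g. with "ipcluster start -n 8"; the processing function is a placeholder):

        from ipyparallel import Client   # packaged as IPython.parallel in older releases

        rc = Client()      # connect to the running cluster
        view = rc[:]       # a view over all engines

        def process_tile(tile_id):
            # Placeholder for an Earth-observation processing step on one data tile.
            return tile_id, tile_id ** 2

        results = view.map_sync(process_tile, range(16))
        print(results[:4])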

  11. Detecting Distributed SQL Injection Attacks in a Eucalyptus Cloud Environment

    NASA Technical Reports Server (NTRS)

    Kebert, Alan; Barnejee, Bikramjit; Solano, Juan; Solano, Wanda

    2013-01-01

    The cloud computing environment offers malicious users the ability to spawn multiple instances of cloud nodes that are similar to virtual machines, except that they can have separate external IP addresses. In this paper we demonstrate how this ability can be exploited by an attacker to distribute his/her attack, in particular SQL injection attacks, in such a way that an intrusion detection system (IDS) could fail to identify this attack. To demonstrate this, we set up a small private cloud, established a vulnerable website in one instance, and placed an IDS within the cloud to monitor the network traffic. We found that an attacker could quite easily defeat the IDS by periodically altering its IP address. To detect such an attacker, we propose to use multi-agent plan recognition, where the multiple source IPs are considered as different agents who are mounting a collaborative attack. We show that such a formulation of this problem yields a more sophisticated approach to detecting SQL injection attacks within a cloud computing environment.
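
    A simple heuristic (not the multi-agent plan-recognition method proposed in the paper) illustrates why aggregation across source IPs matters: suspicious request fragments are flagged per request, then grouped by target URL so that a probe distributed over several cloud instances is still counted as one coordinated campaign.

        import re
        from collections import defaultdict

        SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bselect\b|\bor\s+1=1)", re.I)

        campaigns = defaultdict(set)   # target URL -> set of suspicious source IPs

        def inspect(src_ip, url, query_string, threshold=3):
            if SQLI_PATTERN.search(query_string):
                campaigns[url].add(src_ip)
                if len(campaigns[url]) >= threshold:
                    print("possible distributed SQLi against", url, sorted(campaigns[url]))

        inspect("10.0.0.5", "/login", "user=admin'--")
        inspect("10.0.0.9", "/login", "user=a&pass=' OR 1=1")
        inspect("10.0.0.7", "/login", "q=1 UNION SELECT password FROM users")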

  12. D-VASim: an interactive virtual laboratory environment for the simulation and analysis of genetic circuits.

    PubMed

    Baig, Hasan; Madsen, Jan

    2017-01-15

    Simulation and behavioral analysis of genetic circuits is a standard approach of functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. Runtime interaction gives the user the feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation, and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
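
    Since D-VASim consumes SBML models produced elsewhere, a model can be sanity-checked before import; the snippet below uses python-libsbml for that purpose (the file name is illustrative, and this step is not part of D-VASim itself).

        import libsbml   # python-libsbml

        doc = libsbml.readSBML("genetic_circuit.xml")   # hypothetical model file
        if doc.getNumErrors() > 0:
            doc.printErrors()
        else:
            model = doc.getModel()
            species = [model.getSpecies(i).getId() for i in range(model.getNumSpecies())]
            print("species:", species)
            print("reactions:", model.getNumReactions())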

  13. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  14. The EVER-EST portal as support for the Sea Monitoring Virtual Research Community, through the sharing of resources, enabling dynamic collaboration and promoting community engagement

    NASA Astrophysics Data System (ADS)

    Foglini, Federica; Grande, Valentina; De Leo, Francesco; Mantovani, Simone; Ferraresi, Sergio

    2017-04-01

    EVER-EST offers a framework based on advanced services delivered both at the e-infrastructure and domain-specific level, with the objective of supporting each phase of the Earth Science Research and Information Lifecycle. It provides innovative e-research services to Earth Science user communities for communication, cross-validation and the sharing of knowledge and science outputs. The project follows a user-centric approach: real use cases taken from pre-selected Virtual Research Communities (VRC) covering different Earth Science research scenarios drive the implementation of the Virtual Research Environment (VRE) services and capabilities. The Sea Monitoring community is involved in the evaluation of the EVER-EST infrastructure. The community of potential users is wide and heterogeneous, including both multi-disciplinary scientists and national/international agencies and authorities (e.g. MPA directors, technicians from regional agencies like ARPA in Italy, and technicians working for the Ministry of the Environment) dealing with the adoption of a better way of measuring the quality of the environment. The scientific community has the main role of assessing the best criteria and indicators for defining the Good Environmental Status (GES) in their own sub-regions, and implementing methods, protocols and tools for monitoring the GES descriptors. According to the Marine Strategy Framework Directive (MSFD), the environmental status of marine waters is defined by 11 descriptors, with a proposed set of 29 associated criteria and 56 different indicators. The objective of the Sea Monitoring VRC is to provide useful and applicable contributions to the evaluation of the descriptors D1.Biodiversity, D2.Non-indigenous species and D6.Seafloor Integrity (http://ec.europa.eu/environment/marine/good-environmental-status/index_en.htm). The main challenges for the community members are: 1. discovery of existing data and products distributed among different infrastructures; 2. sharing methodologies for GES evaluation and monitoring; 3. working on the same workflows and data; 4. adopting shared powerful tools for data processing (e.g. software and servers). The Sea Monitoring portal provides the VRC users with tools and services aimed at enhancing their ability to interoperate and share knowledge, experience and methods for GES assessment and monitoring, such as: • digital information services for data management, exploitation and preservation (accessibility of heterogeneous data sources including associated documentation); • e-collaboration services to communicate and share knowledge, ideas, protocols and workflows; • e-learning services to facilitate the use of common workflows for assessing GES indicators; • e-research services for workflow management, validation and verification, as well as visualization and interactive services. The current study is co-financed by the European Union's Horizon 2020 research and innovation programme under the EVER-EST project (Grant Agreement No. 674907).

  15. The Modeling of Virtual Environment Distance Education

    NASA Astrophysics Data System (ADS)

    Xueqin, Chang

    This research presents a virtual environment that integrates, in a virtual mockup, the services available on a university campus, allowing students and teachers in different physical locations to communicate. Advantages of this system include the remote access to a variety of services and educational tools, and the representation of real structures and landscapes in an interactive 3D model that favors the localization of services and preserves the administrative organization of the university. To this end, the system implements access control for users and an interface that allows the use of existing educational equipment and resources not designed for the distance education mode.

  16. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
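
    The multi-criteria idea can be sketched as a weighted scoring of candidate camera positions; the criteria, weights and preferred standoff distance below are illustrative assumptions, not the paper's solver or its force-directed routing algorithm.

        import math

        def distance(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        def score(candidate, prev_cam, focus, density, weights=(1.0, 0.5, 2.0)):
            w_standoff, w_smooth, w_clear = weights
            standoff = -abs(distance(candidate, focus) - 2.0)  # stay ~2 units from the feature
            smoothness = -distance(candidate, prev_cam)        # prefer small, coherent moves
            clearance = -density(candidate)                    # avoid dense/opaque regions
            return w_standoff * standoff + w_smooth * smoothness + w_clear * clearance

        def next_camera(candidates, prev_cam, focus, density):
            return max(candidates, key=lambda c: score(c, prev_cam, focus, density))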

  17. User-Centered Design Strategies for Massive Open Online Courses (MOOCs)

    ERIC Educational Resources Information Center

    Mendoza-Gonzalez, Ricardo, Ed.

    2016-01-01

    In today's society, educational opportunities have evolved beyond the traditional classroom setting. Most universities have implemented virtual learning environments in an effort to provide more opportunities for potential or current students seeking alternative and more affordable learning solutions. "User-Centered Design Strategies for…

  18. The Heliophysics Data Environment Today

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; McGuire, R.; Roberts, D. A.

    2008-01-01

    Driven by the nature of the research questions now most critical to further progress in heliophysics science, data-driven research has evolved from a model once centered on individual instrument Principal Investigator groups and a circle of immediate collaborators into a more inclusive and open environment where data gathered at great public cost must then be findable and usable throughout the broad national and international research community. In this paper, and as an introduction to this special session, we will draw a picture of existing and evolving resources throughout the heliophysics community, the capabilities and data now available to end users, and the relationships and complementarity of different elements in the environment today. We will cite the relative roles of mission and instrument data centers and resident archives, multi-mission data centers, and the growing importance of virtual discipline observatories and cross-cutting services, including the evolution of a common data dictionary. We will briefly summarize our view of the most important challenges still faced by users and providers, and our vision of how today's efforts can evolve into an increasingly enabling data framework for the global research community to tap the widest range of existing missions and their data to address a full range of critical science questions, from the scale of microphysics to the heliospheric system as a whole.

  19. Playing in or out of character: user role differences in the experience of interactive storytelling.

    PubMed

    Roth, Christian; Vermeulen, Ivar; Vorderer, Peter; Klimmt, Christoph; Pizzi, David; Lugrin, Jean-Luc; Cavazza, Marc

    2012-11-01

    Interactive storytelling (IS) is a promising new entertainment technology synthesizing preauthored narrative with dynamic user interaction. Existing IS prototypes employ different modes to involve users in a story, ranging from individual avatar control to comprehensive control over the virtual environment. The current experiment tested whether different player modes (exerting local vs. global influence) yield different user experiences (e.g., senses of immersion vs. control). A within-subject design involved 34 participants playing the cinematic IS drama "Emo Emma" in both the local (actor) and the global (ghost) mode. The latter mode allowed free movement in the virtual environment and hidden influence on characters, objects, and story development. As expected, control-related experiential qualities (effectance, autonomy, flow, and pride) were more intense for players in the global (ghost) mode. Immersion-related experiences did not differ between modes. Additionally, men preferred the sense of command facilitated by the ghost mode, whereas women preferred the sense of involvement facilitated by the actor mode.

  20. Confessions of a Second Life: Conforming in the Virtual World?

    NASA Astrophysics Data System (ADS)

    Chicas, K.; Bailenson, J.; Stevenson Won, A.; Bailey, J.

    2012-12-01

    Virtual Worlds such as Second Life or World of Warcraft are increasingly popular, with people all over the world joining these online communities. In these virtual environments people break the barrier of reality every day when they fly, walk through walls and teleport from place to place. It is easy for people to violate the norms and behaviors of the real world in the virtual environment without real-world consequences. However, previous research has shown that users' behavior may conform to their digital self-representation (avatar). This is also known as the Proteus effect (Yee, 2007). Are people behaving in virtual worlds in ways that most people would not in the physical world? It is important to understand the behaviors that occur in the virtual world if they have an impact on how people act in the real world.

  1. Development of the e-Baby serious game with regard to the evaluation of oxygenation in preterm babies: contributions of the emotional design.

    PubMed

    Fonseca, Luciana Mara Monti; Dias, Danielle Monteiro Vilela; Góes, Fernanda Dos Santos Nogueira; Seixas, Carlos Alberto; Scochi, Carmen Gracinda Silvan; Martins, José Carlos Amado; Rodrigues, Manuel Alves

    2014-09-01

    The present study aimed to describe the development process of a serious game that enables users to evaluate the respiratory process in a preterm infant based on an emotional design model. The e-Baby serious game was built to feature the simulated environment of an incubator, in which the user performs a clinical evaluation of the respiratory process in a virtual preterm infant. The user learns about the preterm baby's history, chooses the tools for the clinical evaluation, evaluates the baby, and determines whether his/her evaluation is appropriate. The e-Baby game presents phases that contain respiratory process impairments of higher or lower complexity in the virtual preterm baby. Included links give the user the option of recording the entire evaluation procedure and sharing his/her performance on a social network. e-Baby integrates a Clinical Evaluation of the Preterm Baby course in the Moodle virtual environment. This game, which evaluates the respiratory process in preterm infants, could support a more flexible, attractive, and interactive teaching and learning process that includes simulations with features very similar to neonatal unit realities, thus allowing more appropriate training for clinical oxygenation evaluations in at-risk preterm infants. e-Baby allows advanced user-technology-educational interactions because it requires active participation in the process and is emotionally integrated.

  2. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, thus increasing costs. We propose a low-cost, scalable, flexible and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility purposes. Therefore, the proposed system is scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is that a few pixels overlap among them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.
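    The geometric half of such camera-based calibration is commonly expressed as a mapping between what the camera observes and each projector's pixel grid; for a planar screen this reduces to a homography (FastFusion itself targets arbitrary surfaces and also performs photometric blending, which is not shown here). A minimal sketch with hypothetical correspondence points, assuming OpenCV is available:

```python
import numpy as np
import cv2

# Hypothetical correspondences: projector-pixel coordinates of a calibration
# pattern and the coordinates at which the camera observed those same points.
projector_pts = np.array([[0, 0], [799, 0], [799, 479], [0, 479]], dtype=np.float32)
camera_pts    = np.array([[102, 88], [690, 95], [702, 465], [95, 470]], dtype=np.float32)

# Homography mapping camera observations into this projector's pixel grid.
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector_space(camera_image, size=(800, 480)):
    """Warp an image from camera coordinates into the projector's pixel grid."""
    return cv2.warpPerspective(camera_image, H, size)
```

    With one such mapping per projector, images observed in a common camera frame can be aligned, and the overlapping pixels then drive the photometric blending step.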

  3. Interactive Visual Analysis within Dynamic Ocean Models

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
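    The "virtual dye particle" exploration described above boils down to advecting passive particles through the model's velocity field. A minimal sketch of that step, using a stand-in analytic gyre field in place of real model output (all names and parameters are illustrative):

```python
import numpy as np

def sample_velocity(positions, t):
    """Stand-in for a gridded ocean-model velocity field (steady analytic gyre;
    time dependence is omitted for brevity)."""
    x, y = positions[:, 0], positions[:, 1]
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v =  np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return np.stack([u, v], axis=1)

def advect(positions, t, dt):
    """One midpoint (RK2) step of passive 'dye particle' advection."""
    k1 = sample_velocity(positions, t)
    k2 = sample_velocity(positions + 0.5 * dt * k1, t + 0.5 * dt)
    return positions + dt * k2

# Release a small patch of dye particles and trace them for a few steps.
particles = np.random.rand(500, 2) * 0.1 + np.array([0.3, 0.4])
for step in range(100):
    particles = advect(particles, step * 0.01, 0.01)
```

    The customizable particles mentioned in the abstract would add per-particle physics (buoyancy, decay, diffusion) on top of this basic advection loop.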

  4. Real enough: using virtual public speaking environments to evoke feelings and behaviors targeted in stuttering assessment and treatment.

    PubMed

    Brundage, Shelley B; Hancock, Adrienne B

    2015-05-01

    Virtual reality environments (VREs) are computer-generated, 3-dimensional worlds that allow users to experience situations similar to those encountered in the real world. The purpose of this study was to investigate VREs for potential use in assessing and treating persons who stutter (PWS) by determining the extent to which PWS's affective, behavioral, and cognitive measures in a VRE correlate with those same measures in a similar live environment. Ten PWS delivered speeches, first to a live audience and, on another day, to 2 virtual audiences (a neutral and a challenging audience). Participants completed standard tests of communication apprehension and confidence prior to each condition, and frequency of stuttering was measured during each speech. Correlational analyses revealed significant, positive correlations between virtual and live conditions for affective and cognitive measures as well as for frequency of stuttering. These findings suggest that virtual public speaking environments engender affective, behavioral, and cognitive reactions in PWS that correspond to those experienced in the real world. Therefore, the authentic, safe, and controlled environments provided by VREs may be useful for stuttering assessment and treatment.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez Anez, Francisco

    This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training in maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes that allow computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. Its objective is to create a computer tool for the elaboration of maintenance training courses and for training delivery, based on 3D virtual reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all the complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention, trying to follow the procedure. Users can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software that runs on a regular computer and can be used in an Internet framework. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project in the frame of 'Information Societies Technologies'. A consortium of five companies and one research institute share their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers. The user's hands are free to perform the maintenance intervention, and he can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed at the positions of their real counterparts, independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)

  6. A haptic interface for virtual simulation of endoscopic surgery.

    PubMed

    Rosenberg, L B; Stredney, D

    1996-01-01

    Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.

  7. Distributed interactive virtual environments for collaborative experiential learning and training independent of distance over Internet2.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P

    2004-01-01

    Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next generation Internet2 embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multi-casting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing of a simulated patient with a closed head injury in VRE; dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate but students needed time to adapt and practice in order to improve efficiency. This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer, improved future performance and should entail training participants to competence in using these tools.

  8. Measuring Reduction Methods for VR Sickness in Virtual Environments

    ERIC Educational Resources Information Center

    Magaki, Takurou; Vallance, Michael

    2017-01-01

    Recently, virtual reality (VR) technologies have developed remarkably. However, some users have negative symptoms during VR experiences or post-experiences. Consequently, alleviating VR sickness is a major challenge, but an effective reduction method has not yet been discovered. The purpose of this article is to compare and evaluate VR sickness in…

  9. Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters

    ERIC Educational Resources Information Center

    Younge, Andrew J.

    2016-01-01

    With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…

  10. Affective Load and Engagement in Second Life: Experiencing Urgent, Persistent, and Long-Term Information Needs

    ERIC Educational Resources Information Center

    Nahl, Diane

    2010-01-01

    New users of virtual environments face a steep learning curve, requiring persistence and determination to overcome challenges experienced while acclimatizing to the demands of avatar-mediated behavior. Concurrent structured self-reports can be used to monitor the personal affective and cognitive struggles involved in virtual world adaptation to…

  11. Using shadow page cache to improve isolated drivers performance.

    PubMed

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    Thanks to the reusability offered by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine in order to customize their application environment. To prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permissions in the shadow page table set to read-only so that the isolated driver's write operations can be captured through page faults, which adversely affects the driver's performance. By delaying the re-protection of frequently used shadow pages, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without significantly impacting Chariot's reliability.
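    As a rough, user-space illustration of the proposed idea (not the authors' kernel implementation), the sketch below keeps a bounded LRU cache of shadow pages that remain writable; only pages evicted from the cache would be re-protected and therefore trap again on the next write:

```python
from collections import OrderedDict

class ShadowPageCache:
    """Toy model of the paper's idea: keep the most recently written shadow
    pages writable, and only re-protect (hence re-trap) pages that fall out of
    a bounded cache. All names and sizes here are illustrative."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.writable = OrderedDict()          # page -> True, kept in LRU order

    def on_write_fault(self, page, allowed_pages):
        # Chariot-style check: is the driver allowed to touch this page at all?
        if page not in allowed_pages:
            raise PermissionError(f"illegal driver write to page {page}")
        # Grant write access and remember the page; evict the least recently
        # used page when the cache is full (it would be marked read-only again).
        self.writable[page] = True
        self.writable.move_to_end(page)
        if len(self.writable) > self.capacity:
            evicted, _ = self.writable.popitem(last=False)
            return evicted                     # caller would set this page read-only
        return None

    def is_writable(self, page):
        if page in self.writable:
            self.writable.move_to_end(page)
            return True
        return False
```

    The cache size is the trade-off studied in the paper: a larger cache means fewer page faults (better driver performance) but a longer window during which writes are not re-checked.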

  12. Using Shadow Page Cache to Improve Isolated Drivers Performance

    PubMed Central

    Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    Thanks to the reusability offered by virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine in order to customize their application environment. To prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permissions in the shadow page table set to read-only so that the isolated driver's write operations can be captured through page faults, which adversely affects the driver's performance. By delaying the re-protection of frequently used shadow pages, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without significantly impacting Chariot's reliability. PMID:25815373

  13. Human-scale interaction for virtual model displays: a clear case for real tools

    NASA Astrophysics Data System (ADS)

    Williams, George C.; McDowall, Ian E.; Bolas, Mark T.

    1998-04-01

    We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and the interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.

  14. Android Based Mobile Environment for Moodle Users

    ERIC Educational Resources Information Center

    de Clunie, Gisela T.; Clunie, Clifton; Castillo, Aris; Rangel, Norman

    2013-01-01

    This paper is about the development of a platform that eases, through Android-based mobile devices, the mobility of users of virtual courses at the Technological University of Panama. The platform deploys computational techniques such as "web services," design patterns, ontologies and mobile technologies to allow mobile devices to communicate…

  15. Virtual Partnerships in Research and Education.

    ERIC Educational Resources Information Center

    Payne, Deborah A.; Keating, Kelly A.; Myers, James D.

    The William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at the Pacific Northwest National Laboratory (Washington) is a collaborative user facility with many unique scientific capabilities. The EMSL expects to support many of its remote users and collaborators by electronic means and is creating a collaborative environment for this…

  16. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large datasets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various visualization and interaction modes.

  17. Tactical Operations Analysis Support Facility.

    DTIC Science & Technology

    1981-05-01

    Excerpt (hardware inventory and contents entries): Punch/Reader; DMC-11AR DDCMP Micro Processor; DMC-11DA Network Link Line Unit; DL-11E Async Serial Line Interface; Intel IN-1670 448K Words MOS Memory; 5.3 Virtual Processors - VAX-11/750; 5.4 A Relational Data Management System - ORACLE. The Central Processing Unit (CPU) is a 16-bit processor for high-speed, real-time applications, and for large multi-user, multi-task, time-shared ...

  18. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.

    PubMed

    Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  19. A Hypermedia Representation of a Taxonomy of Usability Characteristics in Virtual Environments

    DTIC Science & Technology

    2003-03-01

    user, organization, and social workflow; needs analysis; and user modeling. A user task analysis generates critical information used throughout all...exist specific to VE user interaction [Gabbard and others, 1999]. Typically more than one person performs guidelines-based evaluations, since it’s...unlikely that any one person could identify all if not most of an interaction design’s usability problems. Nielsen [1994] recommends three to five

  20. Issues of Learning Games: From Virtual to Real

    ERIC Educational Resources Information Center

    Carron, Thibault; Pernelle, Philippe; Talbot, Stéphane

    2013-01-01

    Our research work deals with the development of new learning environments, and we are particularly interested in studying the different aspects linked to users' collaboration in these environments. We believe that Game-based Learning can significantly enhance learning. That is why we have developed learning environments grounded on graphical…

  1. A virtual reality environment for telescope operation

    NASA Astrophysics Data System (ADS)

    Martínez, Luis A.; Villarreal, José L.; Ángeles, Fernando; Bernal, Abel

    2010-07-01

    Astronomical observatories and telescopes are becoming increasingly large and complex systems, requiring any potential user to acquire a great amount of information before accessing them. At present, the most common way to cope with that information is to implement larger graphical user interfaces and computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in the improvement of the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, which is suggested here as a software platform for the operation of the 1-m telescope.

  2. Virtual displays for 360-degree video

    NASA Astrophysics Data System (ADS)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  3. Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.

    PubMed

    Aromaa, Susanna; Väänänen, Kaisa

    2016-09-01

    In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics evaluation (HFE) during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototype for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was rated as more suitable than the AR system for supporting the assessment of visibility, reach, and the use of tools. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously, this novel detection method was evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
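    The record does not specify the identification method beyond the use of billing/telemetry features, so the sketch below shows one plausible supervised setup: per-interval telemetry feature vectors labelled by program and classified with a stock classifier. The data here are synthetic placeholders; with real Ceilometer-derived features the same pipeline would report the kind of per-program accuracy discussed above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per monitoring interval, with columns
# such as CPU time, disk I/O, network bytes, and memory use from telemetry.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                 # placeholder telemetry features
y = rng.integers(0, 4, size=400)              # labels for four known programs

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # cross-validated identification accuracy
print("program-identification accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```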

  5. Hack-proof Synchronization Protocol for Multi-player Online Games

    NASA Astrophysics Data System (ADS)

    Fung, Yeung Siu; Lui, John C. S.

    Modern multi-player online games are popular and attractive because they provide a sense of virtual world experience to users: players can interact with each other on the Internet but perceive local area network responsiveness. To make this possible, most modern multi-player online games use a similar networking architecture that aims to hide the effects of network latency, packet loss, and high variance of delay from players. Because real-time interactivity is a crucial feature from a player's point of view, any delay perceived by a player can affect his/her performance [16]. Therefore, the game client must be able to run and accept new user commands continuously regardless of the condition of the underlying communication channel, and it must not stop responding while waiting for update packets from other players. To make this possible, multi-player online games typically use protocols based on "dead-reckoning" [5, 6, 9], which allow loose synchronization between players.
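    A minimal sketch of first-order dead-reckoning, the technique named above: each client extrapolates a remote player's state from the last reported position and velocity, and an authoritative update is only sent (and applied) when the true state drifts beyond a threshold. The threshold and update rule are illustrative choices, not those of any particular game:

```python
import numpy as np

class DeadReckoning:
    """First-order dead-reckoning model for one remote player.
    Extrapolates from the last authoritative update; a new update is issued
    only when the true state drifts too far from the prediction."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_pos = np.zeros(2)
        self.last_vel = np.zeros(2)
        self.last_time = 0.0

    def predict(self, t):
        # Linear extrapolation from the last authoritative update.
        return self.last_pos + self.last_vel * (t - self.last_time)

    def on_state(self, pos, vel, t):
        # Sender-side rule (mirrored here): transmit an update only when the
        # true position deviates from the prediction by more than the threshold.
        if np.linalg.norm(pos - self.predict(t)) > self.threshold:
            self.last_pos, self.last_vel, self.last_time = pos, vel, t
            return True                        # update would be sent / applied
        return False
```

    Because the client keeps rendering the predicted state between updates, gameplay stays responsive even when packets are delayed; the anti-cheat protocol in this paper is concerned with what happens when a malicious client abuses that looseness.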

  6. Personalization in an Interactive Learning Environment through a Virtual Character

    ERIC Educational Resources Information Center

    Reategui, E.; Boff, E.; Campbell, J. A.

    2008-01-01

    Traditional hypermedia applications present the same content and provide identical navigational support to all users. Adaptive Hypermedia Systems (AHS) make it possible to construct personalized presentations to each user, according to preferences and needs identified. We present in this paper an alternative approach to educational AHS where a…

  7. Role of virtual reality for cerebral palsy management.

    PubMed

    Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy

    2014-08-01

    Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and, less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game where a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to the potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article are to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.

  8. A virtual surgical environment for rehearsal of tympanomastoidectomy.

    PubMed

    Chan, Sonny; Li, Peter; Lee, Dong Hoon; Salisbury, J Kenneth; Blevins, Nikolas H

    2011-01-01

    This article presents a virtual surgical environment whose purpose is to assist the surgeon in preparation for individual cases. The system constructs interactive anatomical models from patient-specific, multi-modal preoperative image data, and incorporates new methods for visually and haptically rendering the volumetric data. Evaluation of the system's ability to replicate temporal bone dissections for tympanomastoidectomy, using intraoperative video of the same patients as guides, showed strong correlations between virtual and intraoperative anatomy. The result is a portable and cost-effective tool that may prove highly beneficial for the purposes of surgical planning and rehearsal.

  9. NeuroVR 1.5 - a free virtual reality platform for the assessment and treatment in clinical psychology and neuroscience.

    PubMed

    Riva, Giuseppe; Carelli, Laura; Gaggioli, Andrea; Gorini, Alessandra; Vigna, Cinzia; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca

    2009-01-01

    At MMVR 2007 we presented NeuroVR (http://www.neurovr.org), a free virtual reality platform based on open-source software. The software allows non-expert users to adapt the content of 14 pre-designed virtual environments to the specific needs of the clinical or experimental setting. Following the feedback of the 700 users who downloaded the first version, we developed a new version - NeuroVR 1.5 - that improves the possibility for the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene, by using external sounds, photos or videos. Specifically, the new version now includes full sound support and the ability to trigger external sounds and videos using the keyboard. The outcomes of different trials conducted using NeuroVR will be presented and discussed.

  10. Sensor-Based Human Activity Recognition in a Multi-user Scenario

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Gu, Tao; Tao, Xianping; Lu, Jian

    Existing work on sensor-based activity recognition focuses mainly on single-user activities. However, in real life, activities are often performed by multiple users involving interactions between them. In this paper, we propose Coupled Hidden Markov Models (CHMMs) to recognize multi-user activities from sensor readings in a smart home environment. We develop a multimodal sensing platform and present a theoretical framework to recognize both single-user and multi-user activities. We conduct our trace collection in a smart home and evaluate our framework through experimental studies. Our experimental results show that we achieve an average accuracy of 85.46% with CHMMs.
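    As a sketch of what coupled HMM inference involves (a generic two-chain formulation, not necessarily the authors' exact model), the code below runs a forward filter over the joint activity state of two users, with each chain's transition conditioned on both users' previous states; all matrices are illustrative placeholders:

```python
import numpy as np

def chmm_forward(pi1, pi2, A1, A2, B1, B2, obs1, obs2):
    """Forward filtering for a two-chain coupled HMM over the joint state space.
    A1[i, j, k] = P(s1_t = k | s1_{t-1} = i, s2_{t-1} = j); A2 analogously.
    B1[k, o] = P(obs1 = o | s1 = k); B2 analogously."""
    n1, n2 = len(pi1), len(pi2)
    alpha = np.outer(pi1 * B1[:, obs1[0]], pi2 * B2[:, obs2[0]])
    alpha /= alpha.sum()
    for t in range(1, len(obs1)):
        new = np.zeros((n1, n2))
        for k in range(n1):
            for l in range(n2):
                # Sum over the previous joint state (i, j) of both users.
                trans = A1[:, :, k] * A2[:, :, l]
                new[k, l] = (alpha * trans).sum() * B1[k, obs1[t]] * B2[l, obs2[t]]
        alpha = new / new.sum()
    return alpha        # joint posterior over both users' current activities

# Tiny illustrative setup: 2 activities per user, 3 observation symbols each.
rng = np.random.default_rng(0)
A1 = rng.random((2, 2, 2)); A1 /= A1.sum(axis=2, keepdims=True)
A2 = rng.random((2, 2, 2)); A2 /= A2.sum(axis=2, keepdims=True)
B1 = rng.random((2, 3)); B1 /= B1.sum(axis=1, keepdims=True)
B2 = rng.random((2, 3)); B2 /= B2.sum(axis=1, keepdims=True)
print(chmm_forward(np.full(2, 0.5), np.full(2, 0.5), A1, A2, B1, B2, [0, 2, 1], [1, 1, 0]))
```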

  11. Kinematic evaluation of virtual walking trajectories.

    PubMed

    Cirio, Gabriel; Olivier, Anne-Hélène; Marchal, Maud; Pettré, Julien

    2013-04-01

    Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies that focused on evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only in situations with geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework through a user study that considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
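    The specific trajectographical criteria are not listed in the record, but two generic examples, path length and mean deviation from a reference trajectory, convey the flavour of comparing a recorded virtual-walking path against a model-generated one. The sketch below is illustrative only:

```python
import numpy as np

def path_length(traj):
    """Total travelled distance of an (N, 2) trajectory of x/y positions."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def mean_deviation(traj, reference):
    """Mean distance from each sample of the user's path to the closest sample
    of a reference (model-generated) trajectory; both arrays are (N, 2)."""
    d = np.linalg.norm(traj[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Illustrative use: compare a recorded virtual-walking path with a straight
# reference path between the same two oriented points.
reference = np.linspace([0.0, 0.0], [5.0, 0.0], 100)
recorded  = reference + np.random.normal(scale=0.05, size=reference.shape)
print(path_length(recorded), mean_deviation(recorded, reference))
```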

  12. A collaborative virtual reality environment for neurosurgical planning and training.

    PubMed

    Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel

    2007-11-01

    We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.

  13. Development of and feedback on a fully automated virtual reality system for online training in weight management skills.

    PubMed

    Thomas, J Graham; Spitalnick, Josh S; Hadley, Wendy; Bond, Dale S; Wing, Rena R

    2015-01-01

    Virtual reality (VR) technology can provide a safe environment for observing, learning, and practicing use of behavioral weight management skills, which could be particularly useful in enhancing minimal contact online weight management programs. The Experience Success (ES) project developed a system for creating and deploying VR scenarios for online weight management skills training. Virtual environments populated with virtual actors allow users to experiment with implementing behavioral skills via a PC-based point and click interface. A culturally sensitive virtual coach guides the experience, including planning for real-world skill use. Thirty-seven overweight/obese women provided feedback on a test scenario focused on social eating situations. They reported that the scenario gave them greater skills, confidence, and commitment for controlling eating in social situations. © 2014 Diabetes Technology Society.

  14. Development of and Feedback on a Fully Automated Virtual Reality System for Online Training in Weight Management Skills

    PubMed Central

    Spitalnick, Josh S.; Hadley, Wendy; Bond, Dale S.; Wing, Rena R.

    2014-01-01

    Virtual reality (VR) technology can provide a safe environment for observing, learning, and practicing use of behavioral weight management skills, which could be particularly useful in enhancing minimal contact online weight management programs. The Experience Success (ES) project developed a system for creating and deploying VR scenarios for online weight management skills training. Virtual environments populated with virtual actors allow users to experiment with implementing behavioral skills via a PC-based point and click interface. A culturally sensitive virtual coach guides the experience, including planning for real-world skill use. Thirty-seven overweight/obese women provided feedback on a test scenario focused on social eating situations. They reported that the scenario gave them greater skills, confidence, and commitment for controlling eating in social situations. PMID:25367014

  15. Recent developments in virtual experience design and production

    NASA Astrophysics Data System (ADS)

    Fisher, Scott S.

    1995-03-01

    Today, the media of VR and Telepresence are in their infancy and the emphasis is still on technology and engineering. But, it is not the hardware people might use that will determine whether VR becomes a powerful medium--instead, it will be the experiences that they are able to have that will drive its acceptance and impact. A critical challenge in the elaboration of these telepresence capabilities will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location or experience. This paper will describe the recent development of several Virtual Experiences including: `Menagerie', an immersive Virtual Environment inhabited by virtual characters designed to respond to and interact with its users; and `The Virtual Brewery', an immersive public VR installation that provides multiple levels of interaction in an artistic interpretation of the brewing process.

  16. Derived virtual devices: a secure distributed file system mechanism

    NASA Technical Reports Server (NTRS)

    VanMeter, Rodney; Hotz, Steve; Finn, Gregory

    1996-01-01

    This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.

  17. Effective Communication with Cultural Heritage Using Virtual Technologies

    NASA Astrophysics Data System (ADS)

    Reffat, R. M.; Nofal, E. M.

    2013-07-01

    Cultural heritage is neither static nor stable. There is a need to explore ways of communicating effectively with cultural heritage for tourists and society at large, in an age of immediacy, a time of multiple realities, and for multi-cultural tourists. It is vital to consider cultural heritage as a creative and relational process where places and communities are constantly remade through creative performance. The paper introduces virtual technologies as an approach to attain effective communication with cultural heritage. This approach emphasizes the importance of "user, content and context" in guiding the production of virtual heritage, as opposed to technology being the sole motivator. It addresses how these three issues in virtual heritage need to be transformed from merely representing quantitative data towards conveying cultural information, using the proposed effective communication triangle to represent meaningful relationships between cultural heritage elements, users and context. The paper offers a focused articulation of a proposed computational platform of "interactive, personalized and contextual-based navigation" with Egyptian heritage monuments as one step forward towards achieving effective communication with Egyptian cultural heritage.

  18. Comparison of path visualizations and cognitive measures relative to travel technique in a virtual environment.

    PubMed

    Zanbaka, Catherine A; Lok, Benjamin C; Babu, Sabarish V; Ulinski, Amy C; Hodges, Larry F

    2005-01-01

    We describe a between-subjects experiment that compared four different methods of travel and their effect on cognition and paths taken in an immersive virtual environment (IVE). Participants answered a set of questions based on Crook's condensation of Bloom's taxonomy that assessed their cognition of the IVE with respect to knowledge, understanding and application, and higher mental processes. Participants also drew a sketch map of the IVE and the objects within it. The users' sense of presence was measured using the Steed-Usoh-Slater Presence Questionnaire. The participants' position and head orientation were automatically logged during their exposure to the virtual environment. These logs were later used to create visualizations of the paths taken. Path analysis, such as exploring the overlaid path visualizations and dwell data information, revealed further differences among the travel techniques. Our results suggest that, for applications where problem solving and evaluation of information is important or where opportunity to train is minimal, then having a large tracked space so that the participant can walk around the virtual environment provides benefits over common virtual travel techniques.

  19. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  20. Performance analysis of cooperative virtual MIMO systems for wireless sensor networks.

    PubMed

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-05-28

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs.
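    The record does not give the receiver details, but V-BLAST systems are conventionally described with ordered successive interference cancellation at the detector. The sketch below implements a standard zero-forcing V-BLAST detection step for BPSK over a small virtual-MIMO link; the channel model, noise level, and ordering rule are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def vblast_zf_sic(H, y):
    """Zero-forcing V-BLAST detection with ordered successive interference
    cancellation for BPSK symbols (+1/-1). H is the (n_rx, n_tx) channel matrix
    seen by the cooperating receive nodes, y the received vector."""
    H = H.copy().astype(complex)
    y = y.copy().astype(complex)
    n_tx = H.shape[1]
    remaining = list(range(n_tx))
    s_hat = np.zeros(n_tx)
    for _ in range(n_tx):
        W = np.linalg.pinv(H[:, remaining])            # ZF filter for remaining layers
        norms = np.linalg.norm(W, axis=1)
        k = int(np.argmin(norms))                      # layer with least noise amplification
        idx = remaining[k]
        s_hat[idx] = 1.0 if (W[k] @ y).real >= 0 else -1.0   # BPSK slicing
        y -= H[:, idx] * s_hat[idx]                    # cancel the detected layer
        remaining.pop(k)
    return s_hat

# Illustrative 2x2 virtual-MIMO link with BPSK and Rayleigh fading.
rng = np.random.default_rng(1)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], size=2)
y = H @ s + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
print(s, vblast_zf_sic(H, y))
```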

  1. Performance Analysis of Cooperative Virtual MIMO Systems for Wireless Sensor Networks

    PubMed Central

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-01-01

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs. PMID:23760087

  2. A Prototype Publishing Registry for the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Williamson, R.; Plante, R.

    2004-07-01

    In the Virtual Observatory (VO), a registry helps users locate resources, such as data and services, in a distributed environment. A general framework for VO registries is now under development within the International Virtual Observatory Alliance (IVOA) Registry Working Group. We present a prototype of one component of this framework: the publishing registry. The publishing registry allows data providers to expose metadata descriptions of their resources to the VO environment. Searchable registries can harvest the metadata from many publishing registries and make it searchable by users. We have developed a prototype publishing registry that data providers can install at their sites to publish their resources. The descriptions are exposed using the Open Archives Initiative (OAI) Protocol for Metadata Harvesting. Automating the input of metadata into registries is critical when a provider wishes to describe many resources. We illustrate various strategies for such automation, both currently in use and planned for the future. We also describe how future versions of the registry can adapt automatically to evolving metadata schemas for describing resources.
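    Harvesting from such a publishing registry uses the standard OAI-PMH verbs; the endpoint URL below is hypothetical, but the verb and argument names are part of the protocol. A minimal sketch of fetching one page of record headers:

```python
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records(base_url, metadata_prefix="oai_dc"):
    """Fetch one page of records from an OAI-PMH publishing registry and
    yield (identifier, datestamp) pairs from the record headers."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for record in root.iter(f"{OAI_NS}record"):
        header = record.find(f"{OAI_NS}header")
        yield header.findtext(f"{OAI_NS}identifier"), header.findtext(f"{OAI_NS}datestamp")

# Example against a hypothetical registry endpoint:
# for identifier, stamp in list_records("https://example.org/oai"):
#     print(identifier, stamp)
```

    A full harvester would also follow the protocol's resumptionToken to page through large registries; searchable registries repeat this harvest across many publishing registries and index the collected metadata.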

  3. Evaluation for the design of experience in virtual environments: modeling breakdown of interaction and illusion.

    PubMed

    Marsh, T; Wright, P; Smith, S

    2001-04-01

    New and emerging media technologies have the potential to induce a variety of experiences in users. In this paper, it is argued that the inducement of experience presupposes that users are absorbed in the illusion created by these media. Looking to another successful visual medium, film, this paper borrows from the techniques used in "shaping experience" to hold spectators' attention in the illusion of film, and identifies what breaks the illusion/experience for spectators. This paper focuses on one medium, virtual reality (VR), and advocates a transparent or "invisible style" of interaction. We argue that transparency keeps users in the "flow" of their activities and consequently enhances experience in users. Breakdown in activities breaks the experience and subsequently provides opportunities to identify and analyze potential causes of usability problems. Adopting activity theory, we devise a model of interaction with VR--through consciousness and activity--and introduce the concept of breakdown in illusion. From this, a model of effective interaction with VR is devised and the occurrence of breakdown in interaction and illusion is identified along a continuum of engagement. Evaluation guidelines for the design of experience are proposed and applied to usability problems detected in an empirical study of a head-mounted display (HMD) VR system. This study shows that the guidelines are effective in the evaluation of VR. Finally, we look at the potential experiences that may be induced in users and propose a way to evaluate user experience in virtual environments (VEs) and other new and emerging media.

  4. Web-Based Virtual Microscopy of Digitized Blood Slides for Malaria Diagnosis: An Effective Tool for Skills Assessment in Different Countries and Environments.

    PubMed

    Ahmed, Laura; Seal, Leonard H; Ainley, Carol; De la Salle, Barbara; Brereton, Michelle; Hyde, Keith; Burthem, John; Gilmore, William Samuel

    2016-08-11

    Morphological examination of blood films remains the reference standard for malaria diagnosis. Supporting the skills required to make an accurate morphological diagnosis is therefore essential. However, providing support across different countries and environments is a substantial challenge. This paper reports a scheme supplying digital slides of malaria-infected blood within an Internet-based virtual microscope environment to users with different access to training and computing facilities. The feasibility of the approach was established, allowing users to test, record, and compare their own performance with that of other users. From Giemsa-stained thick and thin blood films, 56 large high-resolution digital slides were prepared, using high-quality image capture and a 63x oil-immersion objective lens. The individual images were combined using the photomerge function of Adobe Photoshop and then adjusted to ensure resolution and reproduction of essential diagnostic features. Web delivery employed the Digital Slidebox platform, allowing digital microscope viewing facilities and image annotation with data gathering from participants. Engagement was high, with images viewed by 38 participants in five countries in a range of environments and a mean completion rate of 42/56 cases. The rate of parasite detection was 78% and accuracy of species identification was 53%, which was comparable with results of similar studies using glass slides. Data collection allowed users to compare performance with other users over time or for each individual case. Overall, these results demonstrate that users worldwide can effectively engage with the system in a range of environments, with the potential to enhance personal performance through education, external quality assessment, and personal professional development, especially in regions where educational resources are difficult to access.

  5. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and to provide real-time interaction tools for the remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.

  6. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    PubMed Central

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-01-01

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
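
    The composite-virtual-object idea described above can be illustrated with a small data structure: virtual objects expose device capabilities, and a composite virtual object combines several of them with a service rule. The sketch below is purely illustrative and is not the Web of Object platform API; all names, thresholds, and the emergency rule are hypothetical.

```python
# Illustrative data structures only (not the Web of Object platform API).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class VirtualObject:
    name: str
    capabilities: Dict[str, Callable[[], float]]  # e.g. {"temperature": read_fn}

    def read(self, capability: str) -> float:
        return self.capabilities[capability]()

@dataclass
class CompositeVirtualObject:
    name: str
    members: List[VirtualObject]
    rule: Callable[[Dict[str, float]], str]  # maps context readings to an action

    def evaluate(self) -> str:
        context = {f"{vo.name}.{cap}": vo.read(cap)
                   for vo in self.members for cap in vo.capabilities}
        return self.rule(context)

def fire_rule(ctx: Dict[str, float]) -> str:
    """Hypothetical emergency-service rule: alert on smoke plus high temperature."""
    if ctx["smoke_sensor.level"] > 0.5 and ctx["thermometer.temperature"] > 60.0:
        return "ALERT"
    return "OK"

smoke = VirtualObject("smoke_sensor", {"level": lambda: 0.8})
thermo = VirtualObject("thermometer", {"temperature": lambda: 72.0})
alarm = CompositeVirtualObject("fire_alarm", [smoke, thermo], rule=fire_rule)
print(alarm.evaluate())  # -> ALERT
```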

  7. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    PubMed

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  8. Crossing the Virtual World Barrier with OpenAvatar

    NASA Technical Reports Server (NTRS)

    Joy, Bruce; Kavle, Lori; Tan, Ian

    2012-01-01

    There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; this results in the process of having to "reinvent the wheel" when new games, virtual worlds and simulations want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework to allow artists and programmers to create reusable assets which can be used by end users to generate vast numbers of complete models that are unique and functional. OpenAvatar serves as a framework which facilitates the modularization of 3D models allowing parts to be interchanged within a set of logical constraints.

  9. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
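
    The depth-map step described above can be illustrated with a standard pinhole back-projection that turns a Kinect depth image into a 3D point cloud. This is a generic sketch with assumed placeholder intrinsics (fx, fy, cx, cy), not the calibration or reconstruction pipeline used in LivePhantom.

```python
# Hedged sketch: back-project a depth map into a point cloud (pinhole model).
import numpy as np

def depth_to_points(depth, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """depth: (H, W) array of depth values in metres; returns (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example with a synthetic flat depth map 1.5 m from the camera:
cloud = depth_to_points(np.full((480, 640), 1.5))
print(cloud.shape)  # (307200, 3)
```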

  10. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera’s position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  11. Designing Virtual Worlds for Use in Mathematics Education.

    ERIC Educational Resources Information Center

    Winn, William; Bricken, William

    Virtual Reality (VR) is a computer generated, multi-dimensional, inclusive environment that can build axioms of algebra into the behavior of the world. This paper discusses the use of VR to represent part of the algebra curriculum in order to improve students' classroom experiences in learning algebra. Students learn to construct their knowledge…

  12. The Virtual Campus: Technology and Reform in Higher Education. ASHE-ERIC Higher Education Report, Volume 25, No. 5.

    ERIC Educational Resources Information Center

    Van Dusen, Gerald C.

    The "virtual campus" is a metaphor for the electronic teaching, learning, and research environment created by the convergence of several relatively new technologies including, but not restricted to, the Internet, World Wide Web, computer-mediated communication, video conferencing, multi-media, groupware, video-on-demand, desktop…

  13. Sonic intelligence as a virtual therapeutic environment.

    PubMed

    Tarnanas, Ioannis; Adam, Dimitrios

    2003-06-01

    This paper reports the results of a research project comparing two virtual collaborative environments as stress-coping environments in real-life situations: one with first-person visual immersion (first-perspective interaction) and a second in which the user interacts through a sound-kinetic virtual representation of himself (avatar). Recent developments in coping research are proposing a shift from a trait-oriented approach of coping to a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping-skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across two groups. The groups were of equal size and gender-balanced. Both groups went through two phases. In Phase I (Solo), each participant was assessed individually using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement within the time course of various hypothetical stressful encounters performed under two different conditions and a control condition. In Condition A, the participant was given a virtual stress assessment scenario relative to a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario relative to a behaviorally realistic motion-controlled avatar with sonic feedback (VRSA). In Condition C, the No Treatment Condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks, but with participants working in pairs. The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions, and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment seems to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to the stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it might be better to conduct research using a sound-kinetic, avatar-based collaborative environment than a virtual first-person perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera, and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO, which created an artificial environment that encodes the user's motion from a video camera targeted at the face of the user and from physiological sensors attached to the body.

  14. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  15. Action tagging in a multi-user indoor environment for behavioural analysis purposes.

    PubMed

    Guerra, Claudio; Bianchi, Valentina; De Munari, Ilaria; Ciampolini, Paolo

    2015-01-01

    The EU population is getting older, and ICT-based solutions are expected to provide support for the challenges implied by this demographic change. At the University of Parma, an AAL (Ambient Assisted Living) system named CARDEA has been developed. In this paper, a new feature of the system is introduced in which environmental and personal (i.e., wearable) sensors coexist, providing an accurate picture of the user's activity and needs. Environmental devices may greatly help in performing activity recognition and behavioral analysis tasks. However, in a multi-user environment, this implies the need to attribute environmental sensor outcomes to a specific user, i.e., to identify the user when he or she performs a task detected by an environmental device. We implemented such an "action tagging" feature, based on information fusion, within the CARDEA environment as an inexpensive alternative solution to the problematic issue of indoor localization.
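
    A minimal sketch of the action-tagging idea, assuming a simple time-proximity fusion rule: an environmental sensor event is attributed to the user whose wearable sensor reported activity closest in time. This is an illustration only, not the CARDEA implementation; the event structure and the two-second window are assumptions.

```python
# Illustrative time-proximity fusion (not the CARDEA algorithm).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WearableEvent:
    user_id: str
    timestamp: float  # seconds

def tag_action(env_timestamp: float,
               wearable_events: List[WearableEvent],
               window: float = 2.0) -> Optional[str]:
    """Return the id of the user most likely responsible for an environmental event."""
    candidates = [e for e in wearable_events
                  if abs(e.timestamp - env_timestamp) <= window]
    if not candidates:
        return None  # nobody moved near that time: leave the event untagged
    best = min(candidates, key=lambda e: abs(e.timestamp - env_timestamp))
    return best.user_id

events = [WearableEvent("alice", 10.2), WearableEvent("bob", 14.9)]
print(tag_action(15.0, events))  # -> bob
```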

  16. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak polhemus sensor, two Fastrak polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. The RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and does not offer advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for users to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
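
    As an illustration of one of the CAD formats listed above, the sketch below parses vertices and faces from a Wavefront OBJ file. It is a generic, minimal reader, not the WorldToolKit loader, and the file name in the comment is hypothetical.

```python
# Minimal Wavefront OBJ reader (illustrative only, not the WTK loader).
def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":      # geometric vertex: "v x y z"
                vertices.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == "f":    # face: "f v1[/vt][/vn] v2 ..." (1-based indices)
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# vertices, faces = load_obj("model.obj")  # hypothetical file name
```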

  17. Study on Collaborative Object Manipulation in Virtual Environment

    NASA Astrophysics Data System (ADS)

    Mayangsari, Maria Niken; Yong-Moo, Kwon

    This paper presents a comparative study of network collaboration performance under different degrees of immersion. In particular, the relationship between user collaboration performance and the degree of immersion provided by the system is addressed and compared on the basis of several experiments. The user tests of our system include several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments.

  18. Development of virtual environment for treating acrophobia.

    PubMed

    Ku, J; Jang, D; Shin, M; Jo, H; Ahn, H; Lee, J; Cho, B; Kim, S I

    2001-01-01

    Virtual Reality (VR) is a new technology that enables humans to communicate with computers. It allows the user to see, hear, feel, and interact in a three-dimensional virtual world created graphically. Virtual Reality Therapy (VRT), based on this sophisticated technology, has recently been used in the treatment of subjects diagnosed with acrophobia, a disorder that is characterized by marked anxiety upon exposure to heights, avoidance of heights, and a resulting interference in functioning. Conventional virtual reality systems for the treatment of acrophobia are limited in that they rely on overly costly devices or somewhat unrealistic graphic scenes. The goal of this study was to develop an inexpensive and more realistic virtual environment for the exposure therapy of acrophobia. We constructed two types of virtual environment. One consists of a bungee-jump tower in the middle of a city; it includes an open lift, surrounded by props beside the tower, that allows the patient to feel a sense of height. The other is composed of diving boards of various heights; it provides a view of a lower diving board and of people swimming in the pool, giving the patient stimuli upon exposure to heights.

  19. 3D virtual environment of Taman Mini Indonesia Indah in a web

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.

    2018-05-01

    Taman Mini Indonesia Indah, known as TMII, is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the various provinces of Indonesia. The official website of TMII describes the traditional houses, but the information available to the public is limited. To provide the public with more detailed information about TMII, this research aims to create and develop virtual traditional houses as 3D graphics models and present them via a website. Virtual Reality (VR) technology was used to display the visualization of TMII and its surrounding environment. This research used Blender software to create the 3D models and Unity3D software to build virtual reality models that can be shown on the web. The research successfully created 33 virtual traditional houses of the provinces of Indonesia. The textures of the traditional houses were taken from the originals to make the models realistic. The result of this research is the TMII website, including virtual culture houses that can be displayed in a web browser. The website consists of virtual environment scenes, and Internet users can walk through and navigate inside the scenes.

  20. Lost in Interaction in IMS Learning Design Runtime Environments

    ERIC Educational Resources Information Center

    Derntl, Michael; Neumann, Susanne; Oberhuemer, Petra

    2014-01-01

    Educators are exploiting the advantages of advanced web-based collaboration technologies and massive online interactions. Interactions between learners and human or nonhuman resources therefore play an increasingly important pedagogical role, and the way these interactions are expressed in the user interface of virtual learning environments is…

  1. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications.

    PubMed

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-08-19

    Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.
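
    A toy sketch of the redundancy-driven resource assignment mentioned above, under the assumption that each virtual WSN simply requests some number of redundant physical nodes from a shared pool; the greedy allocation and all names are illustrative, not the paper's algorithm.

```python
# Illustrative greedy allocation of physical nodes to virtual WSNs (assumed mechanics).
def assign_vwsns(physical_nodes, demands):
    """physical_nodes: list of node ids; demands: {vwsn_name: nodes_required}."""
    pool = list(physical_nodes)
    assignment = {}
    # Serve the most demanding virtual network first.
    for vwsn, required in sorted(demands.items(), key=lambda kv: -kv[1]):
        if len(pool) < required:
            raise RuntimeError(f"not enough physical nodes for {vwsn}")
        assignment[vwsn] = [pool.pop() for _ in range(required)]
    return assignment

nodes = [f"n{i}" for i in range(10)]
print(assign_vwsns(nodes, {"air_quality": 3, "fire_watch": 5}))
```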

  2. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve not only multi-scale, telescopic visualization of real and virtual information but also the conduct of thought experiments through AR. As a result of the consistency, this framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as `make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  3. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.

  4. HDOMO: Smart Sensor Integration for an Active and Independent Longevity of the Elderly.

    PubMed

    Frontoni, Emanuele; Pollini, Rama; Russo, Paola; Zingaretti, Primo; Cerri, Graziano

    2017-11-13

    The aim of this paper is to present the main results of HDOMO, an Ambient Assisted Living (AAL) project that involved 16 Small and Medium Enterprises (SMEs) and 2 research institutes. The objective of the project was to create an autonomous and automated domestic environment, primarily for elderly people and people with physical and motor disabilities. A known and familiar environment should help users in their daily activities, and it should act as a virtual caregiver by calling, if necessary, relief efforts. In essence, the aim of the project is to simplify the life of people in need of support, while keeping them autonomous in their private environment. From a technical point of view, the project provides for the use of different Smart Objects (SOs), able to communicate among each other, in a cloud-based infrastructure, and with the assisted users and their caregivers, from a perspective of interoperability and standardization of devices, usability, and effectiveness of alarm systems. In the state of the art there are projects that achieve only a few of the elements listed; the HDOMO project aims to achieve all of them effectively in a single project. The experimental trials performed in a real scenario demonstrated the accuracy and efficiency of the system in extracting and processing data in real time in order to act promptly, and in providing a timely response to the needs of the user by integrating and confirming main alarms with different interoperable smart sensors. The article proposes a new technique to improve the accuracy of the system in detecting alarms using a multi-SO approach with information fusion between different devices, proving that this architecture can provide robust and reliable results in real environments.
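
    The alarm-confirmation idea can be illustrated with a simple k-of-n rule: an alarm is accepted only if at least k distinct smart objects report it within a short time window. This sketch is an assumption-laden illustration, not the HDOMO fusion technique; the sensor names and the 30-second window are made up.

```python
# Illustrative k-of-n alarm confirmation across smart objects (not the HDOMO algorithm).
def confirm_alarm(reports, k=2, window=30.0):
    """reports: list of (smart_object_id, timestamp); returns True if confirmed."""
    reports = sorted(reports, key=lambda r: r[1])
    for i, (_, t0) in enumerate(reports):
        # Count distinct smart objects reporting within `window` seconds of t0.
        in_window = {so for so, t in reports[i:] if t - t0 <= window}
        if len(in_window) >= k:
            return True
    return False

print(confirm_alarm([("fall_detector", 100.0), ("bed_sensor", 112.5)]))  # True
print(confirm_alarm([("fall_detector", 100.0), ("bed_sensor", 200.0)]))  # False
```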

  5. HDOMO: Smart Sensor Integration for an Active and Independent Longevity of the Elderly

    PubMed Central

    2017-01-01

    The aim of this paper is to present the main results of HDOMO, an Ambient Assisted Living (AAL) project that involved 16 Small and Medium Enterprises (SMEs) and 2 research institutes. The objective of the project was to create an autonomous and automated domestic environment, primarily for elderly people and people with physical and motor disabilities. A known and familiar environment should help users in their daily activities, and it should act as a virtual caregiver by calling, if necessary, relief efforts. In essence, the aim of the project is to simplify the life of people in need of support, while keeping them autonomous in their private environment. From a technical point of view, the project provides for the use of different Smart Objects (SOs), able to communicate among each other, in a cloud-based infrastructure, and with the assisted users and their caregivers, from a perspective of interoperability and standardization of devices, usability, and effectiveness of alarm systems. In the state of the art there are projects that achieve only a few of the elements listed; the HDOMO project aims to achieve all of them effectively in a single project. The experimental trials performed in a real scenario demonstrated the accuracy and efficiency of the system in extracting and processing data in real time in order to act promptly, and in providing a timely response to the needs of the user by integrating and confirming main alarms with different interoperable smart sensors. The article proposes a new technique to improve the accuracy of the system in detecting alarms using a multi-SO approach with information fusion between different devices, proving that this architecture can provide robust and reliable results in real environments. PMID:29137174

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shawver, D.M.; Stansfield, S.

    This overview presents current research at Sandia National Laboratories in the Virtual Reality and Intelligent Simulation Lab. Into an existing distributed VR environment which we have been developing, and which provides shared immersion for multiple users, we are adding virtual actor support. The virtual actor support we are adding to this environment is intended to provide semi-autonomous actors, with oversight and high-level guiding control by a director/user, and to allow the overall action to be driven by a scenario. We present an overview of the environment into which our virtual actors will be added in Section 3, and discuss the direction of the Virtual Actor research itself in Section 4. We will briefly review related work in Section 2. First, however, we need to place the research in the context of what motivates it. The motivation for our construction of this environment, and the line of research associated with it, is based on a long-term program of providing support, through simulation, for situational training, by which we mean a type of training in which students learn to handle multiple situations or scenarios. In these situations, the student may encounter events ranging from the routine occurrence to the rare emergency. Indeed, the appeal of such training systems is that they could allow the student to experience and develop effective responses for situations they would otherwise have no opportunity to practice, until they happened to encounter an actual occurrence. Examples of the type of students for this kind of training would be security forces or emergency response forces. An example of the type of training scenario we would like to support is given in Section 4.2.

  7. Cross-standard user description in mobile, medical oriented virtual collaborative environments

    NASA Astrophysics Data System (ADS)

    Ganji, Rama Rao; Mitrea, Mihai; Joveski, Bojan; Chammem, Afef

    2015-03-01

    By combining four different open standards belonging to the ISO/IEC JTC1/SC29 WG11 (a.k.a. MPEG) and W3C, this paper advances an architecture for mobile, medical oriented virtual collaborative environments. The various users are represented according to MPEG-UD (MPEG User Description) while the security issues are dealt with by deploying the WebID principles. On the server side, irrespective of their elementary types (text, image, video, 3D, …), the medical data are aggregated into hierarchical, interactive multimedia scenes which are alternatively represented into MPEG-4 BiFS or HTML5 standards. This way, each type of content can be optimally encoded according to its particular constraints (semantic, medical practice, network conditions, etc.). The mobile device should ensure only the displaying of the content (inside an MPEG player or an HTML5 browser) and the capturing of the user interaction. The overall architecture is implemented and tested under the framework of the MEDUSA European project, in partnership with medical institutions. The testbed considers a server emulated by a PC and heterogeneous user devices (tablets, smartphones, laptops) running under iOS, Android and Windows operating systems. The connection between the users and the server is alternatively ensured by WiFi and 3G/4G networks.

  8. A Full Body Steerable Wind Display for a Locomotion Interface.

    PubMed

    Kulkarni, Sandip D; Fisher, Charles J; Lefler, Price; Desai, Aditya; Chakravarthy, Shanthanu; Pardyjak, Eric R; Minor, Mark A; Hollerbach, John M

    2015-10-01

    This paper presents the Treadport Active Wind Tunnel (TPAWT)-a full-body immersive virtual environment for the Treadport locomotion interface designed for generating wind on a user from any frontal direction at speeds up to 20 kph. The goal is to simulate the experience of realistic wind while walking in an outdoor virtual environment. A recirculating-type wind tunnel was created around the pre-existing Treadport installation by adding a large fan, ducting, and enclosure walls. Two sheets of air in a non-intrusive design flow along the side screens of the back-projection CAVE-like visual display, where they impinge and mix at the front screen to redirect towards the user in a full-body cross-section. By varying the flow conditions of the air sheets, the direction and speed of wind at the user are controlled. Design challenges to fit the wind tunnel in the pre-existing facility, and to manage turbulence to achieve stable and steerable flow, were overcome. The controller performance for wind speed and direction is demonstrated experimentally.

  9. Trends in the salience of data collected in a multi user virtual environment: An exploratory study

    NASA Astrophysics Data System (ADS)

    Tutwiler, M. Shane

    In this study, by exploring patterns in the degree of physical salience of the data the students collected, I investigated the relationship between the level of students' tendency to frame explanations in terms of complex patterns and evidence of how they attend to and select data in support of their developing understandings of causal relationships. I accomplished this by analyzing longitudinal data collected as part of a larger study of 143 7th grade students (clustered within 36 teams, 5 teachers, and 2 schools in the same Northeastern school district) as they navigated and collected data in an ecosystems-based multi-user virtual environment curriculum known as the EcoMUVE Pond module (Metcalf, Kamarainen, Tutwiler, Grotzer, Dede, 2011) . Using individual growth modeling (Singer & Willett, 2003) I found no direct link between student pre-intervention tendency to offer explanations containing complex causal components and patterns of physical salience-driven data collection (average physical salience level, number of low physical salience data points collected, and proportion of low physical salience data points collected), though prior science content knowledge did affect the initial status and rate of change of outcomes in the average physical salience level and proportion of low physical salience data collected over time. The findings of this study suggest two issues for consideration about the use of MUVEs to study student data collection behaviors in complex spaces. Firstly, the structure of the curriculum in which the MUVE is embedded might have a direct effect on what types of data students choose to collect. This undercuts our ability to make inferences about student-driven decisions to collect specific types of data, and suggests that a more open-ended curricular model might be better suited to this type of inquiry. Secondly, differences between teachers' choices in how to facilitate the units likely contribute to the variance in student data collection behaviors between students with different teachers. This foreshadows external validity issues in studies that use behaviors of students within a single class to develop "detectors" of student latent traits (e.g., Baker, Corbett, Roll, Koedinger, 2008).
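
    The individual growth modeling referenced above (Singer & Willett, 2003) can be sketched as a linear mixed-effects model with random intercepts and slopes. The snippet below uses statsmodels with synthetic data and hypothetical variable names (salience, prior_knowledge), so it only illustrates the general model form, not the study's actual analysis.

```python
# Hedged sketch of an individual growth model on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, n_waves = 20, 4
student = np.repeat(np.arange(n_students), n_waves)
time = np.tile(np.arange(n_waves), n_students)
prior = np.repeat(rng.uniform(0, 1, n_students), n_waves)   # prior content knowledge
b0 = np.repeat(rng.normal(0.0, 0.3, n_students), n_waves)   # student-level intercepts
b1 = np.repeat(rng.normal(0.0, 0.1, n_students), n_waves)   # student-level slopes
salience = (3.0 + b0 + (-0.2 + b1) * time - 0.3 * prior * time
            + rng.normal(0, 0.2, n_students * n_waves))
df = pd.DataFrame({"student_id": student, "time": time,
                   "prior_knowledge": prior, "salience": salience})

# Random intercept and random slope for time within each student; the
# time x prior_knowledge interaction asks whether prior knowledge predicts
# the rate of change, mirroring the growth-model questions in the study.
model = smf.mixedlm("salience ~ time * prior_knowledge",
                    data=df, groups=df["student_id"], re_formula="~time")
print(model.fit().summary())
```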

  10. Psychological and physiological human responses to simulated and real environments: A comparison between Photographs, 360° Panoramas, and Virtual Reality.

    PubMed

    Higuera-Trujillo, Juan Luis; López-Tarruella Maldonado, Juan; Llinares Millán, Carmen

    2017-11-01

    Psychological research into human factors frequently uses simulations to study the relationship between human behaviour and the environment. Their validity depends on their similarity to the physical environments. This paper aims to validate three environmental-simulation display formats: photographs, 360° panoramas, and virtual reality. To do this we compared the psychological and physiological responses evoked by simulated-environment setups to those from a physical environment setup; we also assessed the users' sense of presence. Analyses show that 360° panoramas offer the closest-to-reality results according to the participants' psychological responses, and virtual reality according to the physiological responses. Correlations between the feeling of presence and physiological and other psychological responses were also observed. These results may be of interest to researchers using environmental-simulation technologies currently available in order to replicate the experience of physical environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The Role of Multi-Institutional Partnerships in Supply Chain Management Course Design and Improvement

    ERIC Educational Resources Information Center

    Long, Suzanna; Moos, J. Chris; Radic, Anne Bartel

    2012-01-01

    The authors examined the skills achieved through a multicultural, virtual student project environment among 3 supply chain management courses. The partnership included 2 universities in the United States and 1 in France and created virtual teams of students across university lines and is presented as a case study. The case includes detailed…

  12. Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Parmar, Dhaval

    Virtual reality is on its advent of becoming mainstream household technology, as technologies such as head-mounted displays, trackers, and interaction devices are becoming affordable and easily available. Virtual reality (VR) has immense potential in enhancing the fields of education and training, and its power can be used to spark interest and enthusiasm among learners. It is, therefore, imperative to evaluate the risks and benefits that immersive virtual reality poses to the field of education. Research suggests that learning is an embodied process. Learning depends on grounded aspects of the body including action, perception, and interactions with the environment. This research aims to study if immersive embodiment through the means of virtual reality facilitates embodied cognition. A pedagogical VR solution which takes advantage of embodied cognition can lead to enhanced learning benefits. Towards achieving this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality. This research evaluates the effects of three levels of immersive embodied interactions on cognitive thinking, presence, usability, and satisfaction among users in the fields of science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is greatly effective in knowledge acquisition and retention, and highly enhances user satisfaction, interest and enthusiasm. Users experience high levels of presence and are profoundly engaged in the learning activities within the immersive virtual environments. The studies presented in this research evaluate pedagogical VR software to train and motivate students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning. This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning of computational concepts as a future direction, and presents the challenges faced in implementing the IEIVR metaphor due to extended periods of immersion. Results from the conducted studies help in formulating guidelines for virtual reality and education researchers working in STEM education and training, and for educators and curriculum developers seeking to improve student engagement in the STEM fields.

  13. Learner Presence, Perception, and Learning Achievements in Augmented-Reality-Mediated Learning Environments

    ERIC Educational Resources Information Center

    Chen, Yu-Hsuan; Wang, Chang-Hwa

    2018-01-01

    Although research has indicated that augmented reality (AR)-facilitated instruction improves learning performance, further investigation of the usefulness of AR from a psychological perspective has been recommended. Researchers consider presence a major psychological effect when users are immersed in virtual reality environments. However, most…

  14. Making Learning Fun: Quest Atlantis, A Game Without Guns

    ERIC Educational Resources Information Center

    Barab, Sasha; Thomas, Michael; Dodge, Tyler; Carteaux, Robert; Tuzun, Hakan

    2005-01-01

    This article describes the Quest Atlantis (QA) project, a learning and teaching project that employs a multiuser, virtual environment to immerse children, ages 9-12, in educational tasks. QA combines strategies used in commercial gaming environments with lessons from educational research on learning and motivation. It allows users at participating…

  15. A Trusted Portable Computing Device

    NASA Astrophysics Data System (ADS)

    Ming-wei, Fang; Jun-jun, Wu; Peng-fei, Yu; Xin-fang, Zhang

    A trusted portable computing device and its security mechanism are presented to solve the security issues of mobile office work, such as virus and Trojan horse attacks and the loss or theft of storage devices. It uses a smart card to build a trusted portable security base, virtualization to create a secure virtual execution environment, a two-factor authentication mechanism to identify legitimate users, and dynamic encryption to protect data privacy. The security environment described in this paper is characterized by portability, security, and reliability, and can meet the security requirements of mobile office work.
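
    A toy illustration of two of the ingredients named above, two-factor authentication and data encryption: a password hash plus a one-time code gate access to data encrypted with a symmetric key (here via the cryptography package's Fernet). This is an assumed sketch, not the paper's smart-card or virtualization design.

```python
# Toy two-factor gate in front of symmetrically encrypted data (illustrative only).
import hashlib
import hmac
from cryptography.fernet import Fernet  # pip install cryptography

STORED_PW_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()

def two_factor_ok(password: str, otp_entered: str, otp_expected: str) -> bool:
    pw_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_PW_HASH)
    otp_ok = hmac.compare_digest(otp_entered, otp_expected)
    return pw_ok and otp_ok

key = Fernet.generate_key()   # in a real design this would be protected by the smart card
vault = Fernet(key)
ciphertext = vault.encrypt(b"confidential mobile-office document")

if two_factor_ok("correct horse battery staple", "493021", "493021"):
    print(vault.decrypt(ciphertext))
```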

  16. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic - Boreal Vulnerability Experiment (ABoVE).

  17. Alteration and Implementation of the CP/M-86 Operating System for a Multi-User Environment.

    DTIC Science & Technology

    1982-12-01

    Alteration and Implementation of the CP/M-86 Operating System for a Multi-User Environment, by Thomas V. Almquist and David S. Stevens; thesis advisor: U. R. Kodres. Master of Science in Computer Science thesis, Naval Postgraduate School, December 1982. Approved for public release; distribution unlimited.

  18. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

    ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade virtual reality (VR) systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale the image to focus on key features, and 3) interact with other users in a shared virtual space enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
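
    The z-stack structure described above can be illustrated by assembling per-plane images into a 3D volume array. The sketch below uses Pillow and NumPy with a hypothetical file pattern and is not part of the ConfocalVR code.

```python
# Sketch: assemble a confocal z-stack of 2D images into a 3D volume array.
import glob
import numpy as np
from PIL import Image  # pip install pillow

def load_stack(pattern="stack/slice_*.png"):
    paths = sorted(glob.glob(pattern))            # one image per focal plane
    slices = [np.asarray(Image.open(p)) for p in paths]
    return np.stack(slices, axis=0)               # shape: (z, y, x[, channels])

# volume = load_stack()   # hypothetical file pattern
# print(volume.shape, volume.dtype)
```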

  19. Using virtual reality to assess user experience.

    PubMed

    Rebelo, Francisco; Noriega, Paulo; Duarte, Emília; Soares, Marcelo

    2012-12-01

    The aim of this article is to discuss how user experience (UX) evaluation can benefit from the use of virtual reality (VR). UX is usually evaluated in laboratory settings. However, considering that UX occurs as a consequence of the interaction between the product, the user, and the context of use, the assessment of UX can benefit from a more ecological test setting. VR provides the means to develop realistic-looking virtual environments with the advantage of allowing greater control of the experimental conditions while granting good ecological validity. The methods used to evaluate UX, as well as their main limitations, are identified. The current VR equipment and its potential applications (as well as its limitations and drawbacks) to overcome some of the limitations in the assessment of UX are highlighted. The relevance of VR for UX studies is discussed, and a VR-based framework for evaluating UX is presented. UX research may benefit from a VR-based methodology in the areas of user research (e.g., assessment of users' expectations derived from their lifestyles) and human-product interaction (e.g., assessment of users' emotions from the first moment of contact with the product and then during the interaction). This article provides knowledge to researchers and professionals engaged in the design of technological interfaces about the usefulness of VR in the evaluation of UX.

  20. Presence and User Experience in a Virtual Environment under the Influence of Ethanol: An Explorative Study.

    PubMed

    Lorenz, Mario; Brade, Jennifer; Diamond, Lisa; Sjölie, Daniel; Busch, Marc; Tscheligi, Manfred; Klimant, Philipp; Heyde, Christoph-E; Hammer, Niels

    2018-04-23

    Virtual Reality (VR) is used for a variety of applications ranging from entertainment to psychological medicine. VR has been demonstrated to influence higher order cognitive functions and cortical plasticity, with implications on phobia and stroke treatment. An integral part for successful VR is a high sense of presence - a feeling of 'being there' in the virtual scenario. The underlying cognitive and perceptive functions causing presence in VR scenarios are however not completely known. It is evident that the brain function is influenced by drugs, such as ethanol, potentially confounding cortical plasticity, also in VR. As ethanol is ubiquitous and forms part of daily life, understanding the effects of ethanol on presence and user experience, the attitudes and emotions about using VR applications, is important. This exploratory study aims at contributing towards an understanding of how low-dose ethanol intake influences presence, user experience and their relationship in a validated VR context. It was found that low-level ethanol consumption did influence presence and user experience, but on a minimal level. In contrast, correlations between presence and user experience were strongly influenced by low-dose ethanol. Ethanol consumption may consequently alter cognitive and perceptive functions related to the connections between presence and user experience.

  1. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
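
    The core mapping described above (each multivariate observation becomes a polyline whose k-th vertex lies on the plane showing the k-th pair of dimensions, and brushing selects observations) can be sketched in a few lines. The Python sketch below uses hypothetical function names and synthetic data; it illustrates the technique rather than the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: map multivariate observations onto "parallel planes".
# Plane k displays the dimension pair (2k, 2k+1); an observation becomes a
# polyline whose k-th vertex is its value in that pair, with planes offset
# along a depth axis so they can be laid out side by side in 3D.

def observations_to_polylines(data, plane_spacing=1.0):
    """data: (n_obs, n_dims) array with an even number of dimensions.
    Returns an (n_obs, n_planes, 3) array of polyline vertices."""
    n_obs, n_dims = data.shape
    n_planes = n_dims // 2
    verts = np.empty((n_obs, n_planes, 3))
    for k in range(n_planes):
        verts[:, k, 0] = k * plane_spacing      # depth position of plane k
        verts[:, k, 1] = data[:, 2 * k]         # horizontal coordinate on the plane
        verts[:, k, 2] = data[:, 2 * k + 1]     # vertical coordinate on the plane
    return verts

def brush(verts, plane, xmin, xmax, ymin, ymax):
    """Select observations whose vertex on `plane` falls inside a rectangle."""
    x, y = verts[:, plane, 1], verts[:, plane, 2]
    return (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)

rng = np.random.default_rng(0)
sims = rng.random((500, 6))                 # 500 simulated observations, 6 dimensions
polylines = observations_to_polylines(sims)
selected = brush(polylines, plane=1, xmin=0.2, xmax=0.5, ymin=0.0, ymax=0.4)
print(f"{selected.sum()} observations brushed on plane 1")
```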

  2. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  3. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Until recently, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses: spatial audio could be employed in various ways, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation models. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
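
    The rendering step underlying any HRTF-based display (convolving a mono source with the left- and right-ear head-related impulse responses for a chosen direction) can be sketched as follows. The HRIRs below are synthetic placeholders, not entries from the database used in the study.

```python
import numpy as np

# Minimal sketch of binaural rendering with a chosen HRTF: convolve a mono
# source with the left- and right-ear head-related impulse responses (HRIRs)
# for the desired direction. The HRIR arrays here are crude placeholders; a
# real system would draw them from a measured database.

def spatialize(mono, hrir_left, hrir_right):
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))    # normalize to avoid clipping

fs = 44100
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)          # 1-second test tone
hrir_l = np.random.default_rng(1).normal(size=256) * np.exp(-np.arange(256) / 32)
hrir_r = np.roll(hrir_l, 8) * 0.7             # crude interaural delay and level cue
binaural = spatialize(source, hrir_l, hrir_r)
print(binaural.shape)                         # (samples, 2) stereo output
```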

  4. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
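
    The redirected walking mentioned above relies on injecting small rotational gains so the physical path curves back toward the tracked area while the virtual path stays straight. The sketch below is a generic illustration of that idea with assumed parameters, not WeaVR's actual controller.

```python
import math

# Illustrative sketch of a rotational-gain redirection step (not WeaVR's
# published algorithm): each step, the virtual heading is nudged so that the
# user's physical path curves back toward the centre of the tracked field
# while the virtual path appears straight.

MAX_CURVATURE = math.radians(3.0)   # assumed max injected rotation per metre walked

def redirect(virtual_yaw, step_length, user_pos, centre=(0.0, 0.0)):
    """Return the updated virtual yaw after one walking step."""
    to_centre = math.atan2(centre[1] - user_pos[1], centre[0] - user_pos[0])
    error = (to_centre - virtual_yaw + math.pi) % (2 * math.pi) - math.pi
    gain = max(-MAX_CURVATURE, min(MAX_CURVATURE, error)) * step_length
    return virtual_yaw + gain

yaw = 0.0
pos = (20.0, 5.0)                   # user standing 20 m east, 5 m north of centre
for _ in range(10):
    yaw = redirect(yaw, step_length=0.7, user_pos=pos)
print(f"virtual yaw drifted by {math.degrees(yaw):.2f} degrees over 10 steps")
```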

  5. Use of virtual reality to promote hand therapy post-stroke

    NASA Astrophysics Data System (ADS)

    Tsoupikova, Daria; Stoykov, Nikolay; Vick, Randy; Li, Yu; Kamper, Derek; Listenberger, Molly

    2013-03-01

    A novel artistic virtual reality (VR) environment was developed and tested for use as a rehabilitation protocol for post-stroke hand rehabilitation therapy. The system was developed by an interdisciplinary team of engineers, art therapists, occupational therapists, and VR artists to improve patients' motivation and engagement. Specific exercises were developed to explicitly promote the practice of therapeutic tasks requiring hand and arm coordination for upper extremity rehabilitation. Here we describe system design, development, and user testing for efficiency, subjects' satisfaction, and clinical feasibility. We report results of the completed qualitative, pre-clinical pilot study of the system's effectiveness for therapy. Fourteen stroke survivors with chronic hemiparesis participated in a single training session within the environment to gauge user response to the protocol through a custom survey. Results indicate that users found the system comfortable, enjoyable, and tiring; found the instructions clear; and reported a high level of satisfaction with the VR environment and with the variety and difficulty of the rehabilitation tasks. Most patients reported very positive impressions of the VR environment and rated it highly, appreciating how engaging and motivating it was. We are currently conducting a longitudinal intervention study over 6 weeks in stroke survivors with chronic hemiparesis. Initial results following use of the system by the first subjects demonstrate that the system is operational and can facilitate therapy for post-stroke patients with upper extremity impairment.

  6. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotron™ processors from Crystal River Engineering, coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and with stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display; separate head and pointer electro-magnetic position trackers; a heterogeneous parallel graphics processing system; and object-oriented C++ program code.

  7. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer and to give the user the feeling that they are operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  8. 3DUI assisted lower and upper member therapy.

    PubMed

    Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron

    2012-01-01

    3DUIs are becoming very popular among researchers, developers, and users, as they allow more immersive and interactive experiences by taking advantage of human dexterity. The features offered by these interfaces outside the gaming environment have allowed the development of applications in the medical area by enhancing the user experience and aiding the therapy process in controlled and monitored environments. Using mainstream videogame 3DUIs based on inertial and image sensors available on the market, this work presents the development of a virtual environment and its navigation through gestures captured from the lower limbs, to assist motion during therapy.

  9. A Learning Game for Youth Financial Literacy Education in the Teen Grid of Second Life Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Liu, Chang; Franklin, Teresa; Shelor, Roger; Ozercan, Sertac; Reuter, Jarrod; Ye, En; Moriarty, Scott

    2011-01-01

    Game-like three-dimensional (3D) virtual worlds have become popular venues for youth to explore and interact with friends. To bring vital financial literacy education to them in places they frequent, a multi-disciplinary team of computer scientists, educators, and financial experts developed a youth-oriented financial literacy education game in…

  10. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    NASA Astrophysics Data System (ADS)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

    Interactive 3D architectural indoor design has become more popular since it began to benefit from virtual reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This allows buyers to purchase an off-the-plan property more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt property on sale is demonstrated beforehand, so that investors have an impression of being in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we have created a real-estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application and that it satisfied the expectations of property buyers.

  11. Love 2.0: a quantitative exploration of sex and relationships in the virtual world Second Life.

    PubMed

    Craft, Ashley John

    2012-08-01

    This study presents the quantitative results of a web-based survey exploring the experiences of those who seek sex and relationships in the virtual world of Second Life. The survey gathered data on demographics, relationships, and sexual behaviors from 235 Second Life residents to compare with U.S. General Social Survey data on Internet users and the general population. The Second Life survey also gathered data on interests in and experiences with a number of sexual practices in both offline and online environments. Comparative analysis found that survey participants were significantly older, more educated, and less religious than a wider group of Internet users, and in certain age groups were far less likely to be married or have children. Motivations for engaging in cybersex were presented. Analysis of interest and experience of different sexual practices supported findings by other researchers that online environments facilitated access, but also indicated that interest in certain sexual practices could differ between offline and online environments.

  12. Using a Virtual Environment to Deliver Evidence-Based Interventions: The Facilitator's Experience

    PubMed Central

    Villarruel, Antonia; Tschannen, Dana; Valladares, Angel; Yaksich, Joseph; Yeagley, Emily; Hawes, Armani

    2015-01-01

    Background: Evidence-based interventions (EBIs) have the potential to maximize positive impact on communities. However, despite the quantity and quality of EBIs for prevention, the need for formalized training and associated training-related expenses, such as travel costs, program materials, and input of personnel hours, pose implementation challenges for many community-based organizations. In this study, the community of inquiry (CoI) framework was used to develop the virtual learning environment to support the adaptation of the ¡Cuídate! (Take Care of Yourself!) Training of Facilitators curriculum (an EBI) to train facilitators from community-based organizations. Objective: The purpose of this study was to examine the feasibility of adapting a traditional face-to-face facilitator training program for ¡Cuídate!, a sexual risk reduction EBI for Latino youth, for use in a multi-user virtual environment (MUVE). Additionally, two aims of the study were explored: the acceptability of the facilitator training and the level of the facilitators' knowledge and self-efficacy to implement the training. Methods: A total of 35 facilitators were trained in the virtual environment. We evaluated the facilitators' experience in the virtual training environment and determined whether the learning environment was acceptable and supported the acquisition of learning outcomes. To this end, the facilitators were surveyed using a modified community of inquiry survey, with questions specific to the Second Life environment, and an open-ended questionnaire. In addition, a comparison to face-to-face training was conducted using survey methods. Results: Results of the community of inquiry survey demonstrated a subscale mean of 23.11 (SD 4.12) out of a possible 30 on social presence, a subscale mean of 8.74 (SD 1.01) out of a possible 10 on teaching presence, and a subscale mean of 16.69 (SD 1.97) out of a possible 20 on cognitive presence. The comparison to face-to-face training showed no significant differences in participants' ability to respond to challenging or sensitive questions (P=.50) or their ability to help participants recognize how Latino culture supports safer sex (P=.32). There was a significant difference in their knowledge of core elements and modules (P<.001). A total of 74% (26/35) of the Second Life participants agreed or strongly agreed that they had the skills to deliver the ¡Cuídate! program. Conclusions: The results showed that participants found the Second Life environment to be acceptable and that it supported an experience in which learners were able to acquire the knowledge and skills needed to deliver the curriculum. PMID:26199045

  13. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  14. Virtual fixtures as tools to enhance operator performance in telepresence environments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.

    1993-12-01

    This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.
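
    A ruler-like virtual fixture of the kind described above can be illustrated as a geometric constraint that pulls the operator's commanded position toward a guide line before it is sent to the remote manipulator. The stiffness value and function names in this sketch are assumptions for illustration, not the cited study's implementation.

```python
import numpy as np

# Sketch of a ruler-like virtual fixture: the operator's commanded position is
# softly attracted to a guide line in the remote workspace, so hand tremor
# perpendicular to the line is suppressed while motion along it passes through.

def apply_line_fixture(command, line_point, line_dir, stiffness=0.8):
    d = np.asarray(line_dir, dtype=float)
    d /= np.linalg.norm(d)
    offset = np.asarray(command, dtype=float) - line_point
    on_line = line_point + np.dot(offset, d) * d        # orthogonal projection onto the line
    return command + stiffness * (on_line - command)    # blend toward the fixture

cmd = np.array([0.32, 0.11, 0.05])                      # noisy operator command (metres)
constrained = apply_line_fixture(cmd, line_point=np.zeros(3), line_dir=[1, 0, 0])
print(constrained)   # the y and z components shrink toward the guide line
```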

  15. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  16. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a lightweight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the upcoming 2015 Summer School.
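
    The abstract names Flask among the frameworks used to wrap existing science codes as web services. The sketch below shows that general pattern with a hypothetical endpoint and a placeholder run_diagnostic() function; it is not CMDA's actual API.

```python
# Minimal sketch of wrapping an existing analysis routine as a web service
# with Flask, in the spirit described above. The endpoint name, parameters,
# and run_diagnostic() function are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_diagnostic(variable, start_year, end_year):
    # Placeholder for the wrapped science code; returns a tiny summary here.
    return {"variable": variable, "years": [start_year, end_year], "status": "ok"}

@app.route("/diagnostic", methods=["GET"])
def diagnostic():
    variable = request.args.get("variable", "tas")
    start = int(request.args.get("start", 2000))
    end = int(request.args.get("end", 2010))
    return jsonify(run_diagnostic(variable, start, end))

if __name__ == "__main__":
    # In production this would sit behind Gunicorn/Tornado, as the abstract notes.
    app.run(host="0.0.0.0", port=8080)
```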

  17. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Sandia National Laboratories has developed a state-of-the-art augmented reality (AR) training system for close-quarters combat (CQB). This system uses a wearable augmented reality system to place the user in a real environment while engaging enemy combatants in virtual space (Boston Dynamics DI-Guy). The Umbra modeling and simulation environment is used to integrate and control the AR system.

  19. Second Life: Creating Worlds of Wonder for Language Learners

    ERIC Educational Resources Information Center

    Ocasio, Michelle A.

    2016-01-01

    This article describes Second Life, a three-dimensional virtual environment in which a user creates an avatar for the purpose of socializing, learning, developing skills, and exploring a variety of academic and social areas. Since its inception in 2003, Second Life has been used by educators to build and foster innovative learning environments and…

  20. Getting the point across: exploring the effects of dynamic virtual humans in an interactive museum exhibit on user perceptions.

    PubMed

    Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin

    2014-04-01

    We have created “You, M.D.”, an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor, and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of a virtual human's visual appearance affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users' reactions, based on the user's gender and body-mass index (BMI), when facing virtual humans with a BMI either concordant or discordant with the user's own. The results of the study indicate that concordance between the users' BMI and the virtual human's BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.

  1. The impact of physical navigation on spatial organization for sensemaking.

    PubMed

    Andrews, Christopher; North, Chris

    2013-12-01

    Spatial organization has been proposed as a compelling approach to externalizing the sensemaking process. However, there are two ways in which space can be provided to the user: by creating a physical workspace that the user can interact with directly, such as can be provided by a large, high-resolution display, or through the use of a virtual workspace that the user navigates using virtual navigation techniques such as zoom and pan. In this study we explicitly examined the use of spatial sensemaking techniques within these two environments. The results demonstrate that these two approaches to providing sensemaking space are not equivalent, and that the greater embodiment afforded by the physical workspace changes how the space is perceived and used, leading to increased externalization of the sensemaking process.

  2. Combining Digital Archives Content with Serious Game Approach to Create a Gamified Learning Experience

    NASA Astrophysics Data System (ADS)

    Shih, D.-T.; Lin, C. L.; Tseng, C.-Y.

    2015-08-01

    This paper presents an interdisciplinary approach to developing a content-aware application that combines gaming with learning on specific categories of digital archives. The employment of a content-oriented game enhances gamification and the efficacy of learning in culture education on the architecture and history of Hsinchu County, Taiwan. The gamified form of the application is used as a backbone to support and provide a strong stimulus to engage users in learning art and culture; this research is therefore implemented under the goal of "The Digital ARt/ARchitecture Project". The purpose of the abovementioned project is to develop interactive serious-game approaches and applications for Hsinchu County historical archives and architecture. We therefore present two applications, "3D AR for Hukou Old Street" and "Hsinchu County History Museum AR Tour", which take the form of augmented reality (AR). By using AR imaging techniques to blend real objects and virtual content, users can immerse themselves in virtual exhibitions of Hukou Old Street and the Hsinchu County History Museum, and learn in a ubiquitous computing environment. This paper proposes a content system that includes tools and materials used to create representations of digitized cultural archives including historical artifacts, documents, customs, religion, and architecture. The Digital ARt/ARchitecture Project is based on the concept of serious games and consists of three aspects: content creation, target management, and AR presentation. The project focuses on developing a proper approach to serve as an interactive game and to offer a learning opportunity for appreciating historic architecture by playing AR cards. Furthermore, the card game aims to provide multi-faceted understanding and learning experiences to help users learn through 3D objects, hyperlinked web data, and the manipulation of learning modes, thereby effectively developing their learning levels on the cultural and historical archives of Hsinchu County.

  3. Exploring 4D Flow Data in an Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Stevens, A. H.; Butkiewicz, T.

    2017-12-01

    Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities, placing a cutting plane through a region of interest, etc. It is hypothesized that the advantages afforded by head-tracked viewing and 6DOF interaction devices will lead to faster and more efficient examination of 4D flow data. A human factors study is currently being prepared to empirically evaluate this method of visualization and interaction.
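
    The dye-particle interaction described above amounts to advecting seed points through a time-varying velocity field. The sketch below uses a synthetic rotating field and simple forward-Euler steps with nearest-grid-point sampling; the real system samples the ROMS model output.

```python
import numpy as np

# Toy sketch of the dye-particle idea: advect seed particles through a gridded,
# time-varying 2D velocity field with forward-Euler steps. The rotating field
# below is synthetic; the actual system uses the ROMS tidal-current model.

ny, nx, nt = 64, 64, 12
y, x = np.mgrid[0:ny, 0:nx]
u = np.stack([-(y - ny / 2) * np.cos(2 * np.pi * k / nt) for k in range(nt)])
v = np.stack([(x - nx / 2) * np.cos(2 * np.pi * k / nt) for k in range(nt)])

def advect(particles, u, v, t_index, dt=0.05):
    """particles: (n, 2) array of (x, y); nearest-grid-point velocity sampling."""
    xi = np.clip(particles[:, 0].round().astype(int), 0, nx - 1)
    yi = np.clip(particles[:, 1].round().astype(int), 0, ny - 1)
    particles[:, 0] += dt * u[t_index, yi, xi]
    particles[:, 1] += dt * v[t_index, yi, xi]
    return particles

dye = np.random.default_rng(2).uniform(20, 44, size=(200, 2))   # released dye particles
for t in range(nt):
    dye = advect(dye, u, v, t)
print(dye.mean(axis=0))   # centroid of the dye cloud after one tidal cycle
```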

  4. What Leads People to Keep on E-Learning? An Empirical Analysis of Users' Experiences and Their Effects on Continuance Intention

    ERIC Educational Resources Information Center

    Rodríguez-Ardura, Inma; Meseguer-Artola, Antoni

    2016-01-01

    User retention is a major goal for higher education institutions running their teaching and learning programmes online. This is the first investigation into how the senses of presence and flow, together with perceptions about two central elements of the virtual education environment (didactic resource quality and instructor attitude), facilitate…

  5. Stage Cylindrical Immersive Display

    NASA Technical Reports Server (NTRS)

    Abramyan, Lucy; Norris, Jeffrey S.; Powell, Mark W.; Mittman, David S.; Shams, Khawaja S.

    2011-01-01

    Panoramic images with a wide field of view are intended to provide a better understanding of an environment by placing objects of the environment on one seamless image. However, understanding the sizes and relative positions of the objects in a panorama is not intuitive and is prone to errors because the field of view is unnatural to human perception. Scientists are often faced with the difficult task of interpreting the sizes and relative positions of objects in an environment when viewing an image of the environment on computer monitors or prints. A panorama can display an object that appears to be to the right of the viewer when it is, in fact, behind the viewer. This misinterpretation can be very costly, especially when the environment is remote and/or only accessible by unmanned vehicles. A 270° cylindrical display has been developed that surrounds the viewer with carefully calibrated panoramic imagery that correctly engages their natural kinesthetic senses and provides a more accurate awareness of the environment. The cylindrical immersive display offers a more natural window to the environment than a standard cubic CAVE (Cave Automatic Virtual Environment), and the geometry allows multiple collocated users to simultaneously view data and share important decision-making tasks. A CAVE is an immersive virtual reality environment that allows one or more users to absorb themselves in a virtual environment. A common CAVE setup is a room-sized cube where the cube sides act as projection planes. By nature, all cubic CAVEs face a problem with edge matching at the edges and corners of the display. Modern immersive displays have found ways to minimize seams by creating very tight edges, and rely on the user to ignore the seam. One significant deficiency of flat-walled CAVEs is that the sense of orientation and perspective within the scene is broken across adjacent walls. On any single wall, parallel lines properly converge at their vanishing point as they should, and the sense of perspective within the scene contained on only one wall has integrity. Unfortunately, parallel lines that lie on adjacent walls do not necessarily remain parallel. This results in inaccuracies in the scene that can distract the viewer and detract from the immersive experience of the CAVE.

  6. The effect of user's perceived presence and promotion focus on usability for interacting in virtual environments.

    PubMed

    Sun, Huey-Min; Li, Shang-Phone; Zhu, Yu-Qian; Hsiao, Bo

    2015-09-01

    Technological advances in human-computer interaction have attracted increasing research attention, especially in the field of virtual reality (VR). Prior research has focused on examining the effects of VR on various outcomes, for example, learning and health. However, which factors affect the final outcomes? That is, what kind of VR system design will achieve higher usability? This question remains largely unanswered. Furthermore, when we look at VR system deployment through a human-computer interaction (HCI) lens, does the user's attitude play a role in achieving the final outcome? This study aims to understand the effect of immersion and involvement, as well as users' regulatory focus, on usability for a somatosensory VR learning system. This study hypothesized that regulatory focus and presence can effectively enhance users' perceived usability. Survey data from 78 students in Taiwan indicated that promotion focus is positively related to users' perceived efficiency, whereas involvement and promotion focus are positively related to users' perceived effectiveness. Promotion focus also predicts user satisfaction and overall usability perception. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. Wearable Virtual White Cane Network for navigating people with visual impairment.

    PubMed

    Gao, Yabiao; Chandrawanshi, Rahul; Nau, Amy C; Tse, Zion Tsz Ho

    2015-09-01

    Navigating the world with visual impairments presents inconveniences and safety concerns. Although a traditional white cane is the most commonly used mobility aid due to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network is an electronic traveling aid that utilizes ultrasound sonar technology to scan the surrounding environment for spatial information. The Wearable Virtual Cane Network is composed of four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. The Wearable Virtual Cane Network employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of our prototype allows the user to perform other tasks while using the Wearable Virtual Cane Network. When the Wearable Virtual Cane Network prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control bar (p < 4.3 × 10⁻⁵, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm² (1 cm × 1 cm) located 70 cm away. Our results showed that the walking speed for an obstacle course was increased by 23% on average when subjects used the Wearable Virtual Cane Network rather than the white cane. The obstacle course experiment also shows that the use of the white cane in combination with the Wearable Virtual Cane Network can significantly improve navigation over using either the white cane or the Wearable Virtual Cane Network alone (p < 0.05, paired t-test). © IMechE 2015.
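
    The core sonar-to-haptics mapping in such a device (echo round-trip time to distance, then distance to vibration strength) can be sketched as follows. The 70 cm cut-off mirrors the detection range quoted above; the other numbers are assumptions for the sketch.

```python
# Illustrative sketch of the sonar-to-haptics mapping in an ultrasonic travel
# aid: echo round-trip time -> distance -> vibration duty cycle.

SPEED_OF_SOUND = 343.0   # m/s at roughly 20 °C

def echo_to_distance(round_trip_s):
    return SPEED_OF_SOUND * round_trip_s / 2.0           # metres to the obstacle

def vibration_duty(distance_m, max_range_m=0.70):
    """Closer obstacles produce stronger vibration; beyond range, no vibration."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m                # 0 (far) .. 1 (touching)

for rt in (0.5e-3, 2.0e-3, 4.5e-3):                      # example echo times in seconds
    d = echo_to_distance(rt)
    print(f"echo {rt * 1e3:.1f} ms -> {d * 100:.0f} cm, duty {vibration_duty(d):.2f}")
```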

  8. The Fluids And Combustion Facility Combustion Integrated Rack And The Multi-User Droplet Combustion Apparatus: Microgravity Combustion Science Using Modular Multi-User Hardware

    NASA Technical Reports Server (NTRS)

    OMalley, Terence F.; Myhre, Craig A.

    2000-01-01

    The Fluids and Combustion Facility (FCF) is a multi-rack payload planned for the International Space Station (ISS) that will enable the study of fluid physics and combustion science in a microgravity environment. The Combustion Integrated Rack (CIR) is one of two International Standard Payload Racks of the FCF and is being designed primarily to support combustion science experiments. The Multi-user Droplet Combustion Apparatus (MDCA) is a multi-user apparatus designed to accommodate four different droplet combustion science experiments and is the first payload for CIR. The CIR will function independently until the later launch of the Fluids Integrated Rack component of the FCF. This paper provides an overview of the capabilities and the development status of the CIR and MDCA.

  9. Authoring Tours of Geospatial Data With KML and Google Earth

    NASA Astrophysics Data System (ADS)

    Barcay, D. P.; Weiss-Malik, M.

    2008-12-01

    As virtual globes become widely adopted by the general public, the use of geospatial data has expanded greatly. With the popularization of Google Earth and other platforms, GIS systems have become virtual reality platforms. Using these platforms, a casual user can easily explore the world, browse massive data-sets, create powerful 3D visualizations, and share those visualizations with millions of people using the KML language. This technology has raised the bar for professionals and academics alike. It is now expected that studies and projects will be accompanied by compelling, high-quality visualizations. In this new landscape, a presentation of geospatial data can be the most effective form of advertisement for a project: engaging both the general public and the scientific community in a unified interactive experience. On the other hand, merely dumping a dataset into a virtual globe can be a disorienting, alienating experience for many users. To create an effective, far-reaching presentation, an author must take care to make their data approachable to a wide variety of users with varying knowledge of the subject matter, expertise in virtual globes, and attention spans. To that end, we present techniques for creating self-guided interactive tours of data represented in KML and visualized in Google Earth. Using these methods, we provide the ability to move the camera through the world while dynamically varying the content, style, and visibility of the displayed data. Such tours can automatically guide users through massive, complex datasets: engaging a broad user-base, and conveying subtle concepts that aren't immediately apparent when viewing the raw data. To the casual user these techniques result in an extremely compelling experience similar to watching video. Unlike video though, these techniques maintain the rich interactive environment provided by the virtual globe, allowing users to explore the data in detail and to add other data sources to the presentation.
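
    A self-guided tour of the kind described above is expressed in KML as a gx:Tour whose playlist flies the camera between waypoints. The following Python sketch emits a minimal tour document; the waypoints and durations are placeholders, and a full tour would also toggle feature visibility and balloons along the way.

```python
# Minimal sketch of authoring a self-guided KML tour: a <gx:Tour> whose
# playlist flies the camera between waypoints.

FLYTO = """    <gx:FlyTo>
      <gx:duration>{duration}</gx:duration>
      <Camera>
        <longitude>{lon}</longitude><latitude>{lat}</latitude>
        <altitude>{alt}</altitude><tilt>{tilt}</tilt>
        <altitudeMode>absolute</altitudeMode>
      </Camera>
    </gx:FlyTo>"""

def make_tour(name, waypoints):
    flytos = "\n".join(FLYTO.format(**wp) for wp in waypoints)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2" '
        'xmlns:gx="http://www.google.com/kml/ext/2.2">\n'
        f'  <gx:Tour><name>{name}</name>\n  <gx:Playlist>\n{flytos}\n'
        '  </gx:Playlist></gx:Tour>\n</kml>\n'
    )

waypoints = [
    {"lon": -122.08, "lat": 37.42, "alt": 5000, "tilt": 45, "duration": 6},
    {"lon": -122.47, "lat": 37.80, "alt": 2000, "tilt": 60, "duration": 8},
]
print(make_tour("Sample fly-through", waypoints))
```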

  10. The Impact of User-Input Devices on Virtual Desktop Trainers

    DTIC Science & Technology

    2010-09-01

    playing the game more enjoyable. Some of these changes include the design of controllers, the controller interface, and ergonomic changes made to...within subjects experimental design to evaluate young active duty Soldier’s ability to move and shoot in a virtual environment using different input...sufficient gaming proficiency, resulting in more time dedicated to training military skills. We employed a within subjects experimental design to

  11. MAI-free performance of PMU-OFDM transceiver in time-variant environment

    NASA Astrophysics Data System (ADS)

    Tadjpour, Layla; Tsai, Shang-Ho; Kuo, C.-C. J.

    2005-06-01

    A precoded multi-user (PMU) OFDM transceiver that reduces the multi-access interference (MAI) due to carrier frequency offset (CFO) to a negligible amount was introduced by Tsai, Lin, and Kuo. In this work, we investigate the performance of this PMU-OFDM system in a time-variant channel environment. We analyze and compare the MAI effect caused by time-variant channels in the PMU-OFDM and OFDMA systems. Generally speaking, the MAI effect consists of two parts. The first part is due to the loss of orthogonality among subchannels for all users, while the second part is due to the CFO effect caused by the Doppler shift. Simulation results show that, although OFDMA outperforms the PMU-OFDM transceiver in a fast time-variant environment without CFO, PMU-OFDM outperforms OFDMA in a slowly time-variant channel via the use of M/2 symmetric or anti-symmetric codewords of the M Hadamard-Walsh codes.
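
    The codeword property referred to above can be checked directly: in an M x M Hadamard-Walsh matrix of the standard Sylvester construction (as produced by scipy.linalg.hadamard), half the rows are symmetric under reversal and half are anti-symmetric. The sketch below only illustrates that property; it is not a simulation of the transceiver.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustration of the codeword property: half the rows of a Sylvester-type
# Hadamard-Walsh matrix are symmetric under time reversal, half anti-symmetric.
# The PMU-OFDM scheme draws user codewords from one of these two halves.

M = 16
H = hadamard(M)
symmetric = [k for k in range(M) if np.array_equal(H[k], H[k][::-1])]
antisymmetric = [k for k in range(M) if np.array_equal(H[k], -H[k][::-1])]
print(f"symmetric rows:      {symmetric}")
print(f"anti-symmetric rows: {antisymmetric}")
assert len(symmetric) == len(antisymmetric) == M // 2
```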

  12. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.

  13. Virtual reality systems

    NASA Technical Reports Server (NTRS)

    Johnson, David W.

    1992-01-01

    Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.

  14. Synthetic environments

    NASA Astrophysics Data System (ADS)

    Lukes, George E.; Cain, Joel M.

    1996-02-01

    The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework to allow interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model applications programmer's interface (API) to ensure that a common controlled imagery exploitation framework is available to all researchers, developers, and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources, and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.

  15. A framework for interactive visual analysis of heterogeneous marine data in an integrated problem solving environment

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei

    2017-07-01

    This paper presents a novel integrated marine visualization framework which focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively display marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized, and video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven, multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
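
    The heart of a ray-casting volume renderer with a transfer function is a front-to-back compositing loop along each ray. The sketch below shows that loop on a synthetic CPU volume with a made-up transfer function; the system described above implements the idea on the GPU over the Argo sections.

```python
import numpy as np

# CPU sketch of front-to-back compositing at the core of ray casting, with a
# simple transfer function mapping scalar value -> (colour, opacity). The
# volume here is a synthetic blob, not Argo data.

def transfer(value):
    """Map a normalized scalar to a colour and an opacity per sample."""
    return np.array([value, value, 1.0 - value]), 0.05 * value

def cast_ray(volume, origin, direction, n_steps=200, step=0.5):
    colour, alpha = np.zeros(3), 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float) / np.linalg.norm(direction)
    for _ in range(n_steps):
        idx = pos.round().astype(int)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            c, a = transfer(volume[tuple(idx)])
            colour += (1.0 - alpha) * a * c           # front-to-back compositing
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                          # early ray termination
                break
        pos += step * d
    return colour, alpha

z, y, x = np.mgrid[0:64, 0:64, 0:64]
vol = np.exp(-((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) / 200.0)
print(cast_ray(vol, origin=(32, 32, -10), direction=(0, 0, 1)))
```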

  16. Development of a virtual flight simulator.

    PubMed

    Kuntz Rangel, Rodrigo; Guimarães, Lamartine N F; de Assis Correa, Francisco

    2002-10-01

    We present the development of a flight simulator that allows the user to interact with a created environment by means of virtual reality devices. This environment simulates the view of a pilot in an airplane cockpit. The environment is projected in a helmet visor and allows the pilot to see inside as well as outside the cockpit. The movement of the airplane is independent of the movement of the pilot's head, which means that the airplane might travel in one direction while the pilot is looking at a 30-degree angle with respect to the direction of travel. In this environment, the pilot is able to take off, fly, and land the airplane. So far, the objects in the environment are geometrical figures. This is an ongoing project, and only partial results are available at present.
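
    The decoupling described above (the rendered view is the aircraft's attitude composed with the independently tracked head orientation) can be expressed as a composition of rotations. The angles in this sketch are illustrative, not values from the simulator.

```python
from scipy.spatial.transform import Rotation as R

# Sketch of the decoupling: the rendered view orientation is the aircraft's
# attitude composed with the pilot's independently tracked head orientation,
# so the pilot can look off-axis while the airplane keeps its own heading.

aircraft = R.from_euler("zyx", [90.0, 0.0, 5.0], degrees=True)   # aircraft yaw/pitch/roll (illustrative)
head = R.from_euler("zyx", [30.0, -10.0, 0.0], degrees=True)     # head orientation relative to the cockpit
view = aircraft * head     # head orientation expressed in the world frame
print(view.as_euler("zyx", degrees=True))
```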

  17. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
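
    The gesture matching mentioned above rests on a dynamic time warping (DTW) distance between a captured sequence and stored templates. The sketch below shows only the core DTW recurrence on synthetic sequences; the paper's full pipeline adds sensor fusion, segmentation, and real-time constraints not shown here.

```python
import numpy as np

# Minimal dynamic time warping (DTW) distance between two feature sequences.

def dtw_distance(a, b):
    """a, b: (n, d) and (m, d) sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(3)
template = np.cumsum(rng.normal(size=(40, 3)), axis=0)              # stored gesture
captured = template[::2] + rng.normal(scale=0.1, size=(20, 3))      # faster, noisy replay
unrelated = np.cumsum(rng.normal(size=(30, 3)), axis=0)             # different gesture
print(dtw_distance(captured, template), dtw_distance(unrelated, template))
```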

  18. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure, loosely couples these data to a variety of geoscience software tools, and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory and seamlessly connect to OGC web services for geoscience data. The data are supplied in open formats using international standards such as GeoSciML. A VGL user employs a web mapping interface to discover the data sources and apply spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them; VGL collates the service query information for use later in the processing workflow, where the data are staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also serve, for example, natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
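
    A hedged sketch of the "defer the download" pattern described above: rather than fetching data in the browser, the portal records the OGC WFS query so it can be staged directly to the compute facility later. The endpoint, feature type, and workflow names below are hypothetical placeholders, not real AuScope services or VGL code.

      # Sketch: build a standard WFS 1.1.0 GetFeature query and collate it into
      # a job description instead of downloading the data client-side.
      from urllib.parse import urlencode

      def build_wfs_query(endpoint, type_name, bbox, max_features=1000):
          """Assemble a WFS GetFeature request URL (query only, no download)."""
          params = {
              "service": "WFS",
              "version": "1.1.0",
              "request": "GetFeature",
              "typeName": type_name,
              "bbox": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
              "maxFeatures": max_features,
              "outputFormat": "text/xml; subtype=gml/3.1.1",
          }
          return f"{endpoint}?{urlencode(params)}"

      # Collated job description that would be shipped to the cloud worker.
      job = {
          "data_queries": [
              build_wfs_query("https://example.org/geoserver/wfs",   # placeholder endpoint
                              "gsml:GeologicUnit",                   # GeoSciML-style feature type
                              bbox=(130.0, -30.0, 140.0, -20.0)),
          ],
          "workflow": "basic_deformation_inversion",                 # illustrative name
      }
      print(job["data_queries"][0])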

  19. Ambient Intelligence in Multimedia and Virtual Reality Environments for Rehabilitation

    NASA Astrophysics Data System (ADS)

    Benko, Attila; Cecilia, Sik Lanyi

    This chapter presents a general overview of the use of multimedia and virtual reality in rehabilitation and in assistive and preventive healthcare. It deals with AI-based multimedia and virtual reality applications intended for use by medical doctors, nurses, special-education teachers and other interested persons, and describes how multimedia and virtual reality can assist their work, including the ways these technologies can support patients' everyday lives and their rehabilitation. In the second part of the chapter we present the Virtual Therapy Room (VTR), an implemented application for aphasic patients created for practicing communication and expressing emotions in a group therapy setting. The VTR shows a room containing a virtual therapist and four virtual patients (avatars). The avatars use their knowledge base to answer the user's questions, providing an AI environment for rehabilitation. The user of the VTR is the aphasic patient, who has to solve the exercises. The picture relevant to the current task appears on a virtual blackboard, and the patient answers questions from the virtual therapist; the questions concern pictures describing an activity or an object at different levels of difficulty. The patient can ask an avatar for an answer; if the avatar knows the answer, its emotion changes from sad to happy. The avatar expresses its emotions along several dimensions: its behavior, facial expression, voice tone and responses all change. The emotion system can be described as a deterministic finite automaton whose states are emotions and whose transition function is derived from the input-response reactions of an avatar. Natural language processing techniques were also implemented to provide high-quality human-computer interface windows for each of the avatars, through which aphasic patients can interact with the avatars. At the end of the chapter we outline possible future research directions.
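
    A minimal sketch of the avatar emotion system described above as a deterministic finite automaton: states are emotions and the transition function is driven by whether the avatar could answer the patient's question. State and event names are illustrative assumptions, not taken from the VTR.

      # Sketch: emotion DFA with a total transition function (unmatched pairs
      # keep the current state); each state would drive the avatar's face,
      # voice tone and response in the full system.
      TRANSITIONS = {
          ("neutral", "knows_answer"):   "happy",
          ("neutral", "unknown_answer"): "sad",
          ("sad",     "knows_answer"):   "happy",
          ("happy",   "unknown_answer"): "sad",
      }

      def step(state, event):
          """Deterministic transition; unknown (state, event) pairs are self-loops."""
          return TRANSITIONS.get((state, event), state)

      state = "neutral"
      for event in ["unknown_answer", "knows_answer", "knows_answer"]:
          state = step(state, event)
          print(event, "->", state)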

  20. Audited credential delegation: a usable security solution for the virtual physiological human toolkit.

    PubMed

    Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S

    2011-06-06

    We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.
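
    A hedged sketch of the audited-delegation idea described above: the user authenticates with a local username and password, every delegation attempt is written to an audit log, and a short-lived opaque token is handed out so the end user never touches an X.509 certificate. The names, storage, and token scheme are illustrative assumptions, not the ACD implementation.

      # Sketch: local-credential authentication with audited delegation.
      import hashlib, hmac, secrets, time

      LOCAL_USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}  # demo store
      AUDIT_LOG = []

      def authenticate(username, password):
          """Check a local username/password pair against the demo store."""
          digest = hashlib.sha256(password.encode()).hexdigest()
          stored = LOCAL_USERS.get(username)
          return stored is not None and hmac.compare_digest(stored, digest)

      def delegate(username, password, resource):
          """Return a short-lived token for `resource`, recording the event."""
          if not authenticate(username, password):
              AUDIT_LOG.append((time.time(), username, resource, "DENIED"))
              return None
          token = secrets.token_urlsafe(32)   # stands in for a delegated credential
          AUDIT_LOG.append((time.time(), username, resource, "GRANTED"))
          return token

      print(delegate("alice", "correct horse", "gridftp://example.org/vph/data"))
      print(AUDIT_LOG)

    The point of the pattern is that certificate handling happens behind this boundary, while the audit log preserves accountability for every access.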
