User-Centered Iterative Design of a Collaborative Virtual Environment
2001-03-01
cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human wayfinding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Shared virtual environments for aerospace training
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Voss, Mark
1994-01-01
Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
Multi-modal virtual environment research at Armstrong Laboratory
NASA Technical Reports Server (NTRS)
Eggleston, Robert G.
1995-01-01
One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.
Seamless 3D interaction for virtual tables, projection planes, and CAVEs
NASA Astrophysics Data System (ADS)
Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III
2000-08-01
The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities and techniques, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to virtual environments, focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing about possible use cases for defense applications.
User-Centered Design Strategies for Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Mendoza-Gonzalez, Ricardo, Ed.
2016-01-01
In today's society, educational opportunities have evolved beyond the traditional classroom setting. Most universities have implemented virtual learning environments in an effort to provide more opportunities for potential or current students seeking alternative and more affordable learning solutions. "User-Centered Design Strategies for…
Virtual interface environment workstations
NASA Technical Reports Server (NTRS)
Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.
1988-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
Computer Assisted Virtual Environment - CAVE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Phillip; Podgorney, Robert; Weingartner,
2018-05-30
Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.
Virtual Reality: You Are There
NASA Technical Reports Server (NTRS)
1993-01-01
Telepresence, or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM, the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.
An Online Image Analysis Tool for Science Education
ERIC Educational Resources Information Center
Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.
2008-01-01
This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2014-01-01
This paper presents the integration of a virtual environment (BlindAid) in an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores if and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine participants, congenitally, adventitiously, or newly blind, during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated, first, that the BlindAid training gave participants additional time to explore the virtual environment systematically. Second, the study helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, or newly blind. PMID:25284952
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are the places that consist of networks of remote servers to store, access, and process data. Cloud computing is a technology where users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ the virtualization concept so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of a service provider.
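The selection rule the abstract describes (sum the per-server virtualization energy, then route work to the data center with the smallest total) can be sketched as follows. The list-of-lists data layout and the function names are illustrative assumptions, not the paper's implementation:

```python
def total_energy(datacenter):
    """Total energy of a data center: sum of the virtualization
    energy of every virtual machine on every server."""
    return sum(sum(vm_energies) for vm_energies in datacenter)

def select_datacenter(datacenters):
    """Route a task to the index of the least-energy data center."""
    return min(range(len(datacenters)),
               key=lambda i: total_energy(datacenters[i]))

# Two hypothetical data centers, each a list of servers, each server
# a list of per-VM energy values.
dcs = [[[3.0, 2.0], [1.0]],   # total 6.0
       [[1.0, 1.0], [1.0]]]   # total 3.0
```

A scheduler would call `select_datacenter(dcs)` per submitted task; here the second data center (index 1) wins with the lower total of 3.0.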
3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)
1996-01-01
The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is in use in widely disparate fields such as geology, botany, biology, and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imura, K; Fujibuchi, T; Hirata, H
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on treatment effect in image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using a 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch, to reflect actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, based on alignment between the skin marker and laser on the virtual patient model, were displayed using numerical values expressed in SI units and the directions of arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wearing a belt with a gyroscope, placed on a table in real space. These rotational errors were evaluated by expressing vector outer (cross) product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points, calculated with respect to the center point, could be efficiently corrected to display the minimum technique mathematically by utilizing the script.
Conclusion: By utilizing the script to correct the rotational errors, together with accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual users to identify efficient positional correction methods easily.
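The cross-product-and-trigonometry evaluation the abstract mentions can be illustrated with a minimal axis-angle sketch; the function names and formulation below are assumptions for illustration, since the study's actual script is not reproduced here:

```python
import math

def cross(a, b):
    """Vector (outer) product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def rotation_error(current, target):
    """Rotation (unit axis, angle in degrees) taking the current body-axis
    direction onto the planned one; atan2 of |cross| over dot is robust
    for angles near 0 and 180 degrees."""
    c = cross(current, target)
    angle = math.degrees(math.atan2(norm(c), dot(current, target)))
    n = norm(c)
    axis = tuple(x / n for x in c) if n else (0.0, 0.0, 0.0)
    return axis, angle
```

For example, a body axis pointing along x that should point along y yields a 90-degree correction about the z axis, which a trainer could display as the single minimal rotation to perform.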
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.
2015-09-01
AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.
Mission Simulation Facility: Simulation Support for Autonomy Development
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Plice, Laura; Neukom, Christian; Flueckiger, Lorenzo; Wagner, Michael
2003-01-01
The Mission Simulation Facility (MSF) supports research in autonomy technology for planetary exploration vehicles. Using HLA (High Level Architecture) across distributed computers, the MSF connects users' autonomy algorithms with provided or third-party simulations of robotic vehicles and planetary surface environments, including onboard components and scientific instruments. Simulation fidelity is variable to meet changing needs as autonomy technology advances in Technology Readiness Level (TRL). A virtual robot operating in a virtual environment offers numerous advantages over actual hardware, including availability, simplicity, and risk mitigation. The MSF is in use by researchers at NASA Ames Research Center (ARC) and has demonstrated basic functionality. Continuing work will support the needs of a broader user base.
Temporal Issues in the Design of Virtual Learning Environments.
ERIC Educational Resources Information Center
Bergeron, Bryan; Obeid, Jihad
1995-01-01
Describes design methods used to influence user perception of time in virtual learning environments. Examines the use of temporal cues in medical education and clinical competence testing. Finds that user perceptions of time affects user acceptance, ease of use, and the level of realism of a virtual learning environment. Contains 51 references.…
Jones, Jake S.
1999-01-01
An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.
Intelligent Motion and Interaction Within Virtual Environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)
2007-01-01
What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.
Jones, J.S.
1999-01-12
An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.
ERIC Educational Resources Information Center
deNoyelles, Aimee; Seo, Kay Kyeong-Ju
2012-01-01
A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…
Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot
2012-04-01
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
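A ray-intersection model of the kind the authors compare against can be sketched in two dimensions: project a point's left- and right-eye screen images for the leader at the center of projection, then intersect the rays from a displaced follower's eyes through those same fixed images. All names and numbers below are illustrative assumptions, not the paper's code:

```python
def screen_x(eye_x, eye_z, px, pz):
    """Project a virtual point (px, pz) onto the screen plane z = 0
    for an eye at (eye_x, eye_z); depth pz > 0 lies behind the screen."""
    t = -eye_z / (pz - eye_z)
    return eye_x + t * (px - eye_x)

def intersect_rays(a0, a1, b0, b1):
    """Intersection (x, z) of the 2D lines a0->a1 and b0->b1."""
    (ax, az), (bx, bz) = a0, b0
    dax, daz = a1[0] - a0[0], a1[1] - a0[1]
    dbx, dbz = b1[0] - b0[0], b1[1] - b0[1]
    det = dax * dbz - daz * dbx
    t = ((bx - ax) * dbz - (bz - az) * dbx) / det
    return ax + t * dax, az + t * daz

def perceived_depth(ipd, view_dist, point_depth, displacement):
    """Depth of an on-axis point as seen by a follower displaced
    along z; displacement < 0 moves the follower toward the screen."""
    # Screen images rendered for the leader at the center of projection.
    sL = screen_x(-ipd / 2, -view_dist, 0.0, point_depth)
    sR = screen_x(+ipd / 2, -view_dist, 0.0, point_depth)
    fz = -(view_dist + displacement)  # follower eye plane
    _, z = intersect_rays((-ipd / 2, fz), (sL, 0.0),
                          (+ipd / 2, fz), (sR, 0.0))
    return z
```

For an on-axis point the model reduces to perceived depth = (v + d)z/v, so a follower standing closer than the center of projection (d < 0) sees compressed depth and one standing farther back sees expansion, matching the direction of the effect reported above, though the paper found smaller distortions than this geometry predicts.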
NASA Technical Reports Server (NTRS)
Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave
1994-01-01
This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.
Technically Speaking: Transforming Language Learning through Virtual Learning Environments (MOOs).
ERIC Educational Resources Information Center
von der Emde, Silke; Schneider, Jeffrey; Kotter, Markus
2001-01-01
Draws on experiences from a 7-week exchange between students learning German at an American college and advanced students of English at a German university. Maps out the benefits to using a MOO (multiple user domains object-oriented) for language learning: a student-centered learning environment structured by such objectives as peer teaching,…
Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Pretto, N.; Poiesi, F.
2017-11-01
We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use a Google Cardboard as the HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
Virtual Collections: An Earth Science Data Curation Service
NASA Astrophysics Data System (ADS)
Bugbee, K.; Ramachandran, R.; Maskey, M.; Gatlin, P. N.
2016-12-01
The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.
Virtual Collections: An Earth Science Data Curation Service
NASA Technical Reports Server (NTRS)
Bugbee, Kaylin; Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick
2016-01-01
The role of Earth science data centers has traditionally been to maintain central archives that serve openly available Earth observation data. However, in order to ensure data are as useful as possible to a diverse user community, Earth science data centers must move beyond simply serving as an archive to offering innovative data services to user communities. A virtual collection, the end product of a curation activity that searches, selects, and synthesizes diffuse data and information resources around a specific topic or event, is a data curation service that improves the discoverability, accessibility, and usability of Earth science data and also supports the needs of unanticipated users. Virtual collections minimize the amount of time and effort needed to begin research by maximizing certainty of reward and by providing a trustworthy source of data for unanticipated users. This presentation will define a virtual collection in the context of an Earth science data center and will highlight a virtual collection case study created at the Global Hydrology Resource Center data center.
Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis
Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun
2015-01-01
In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496
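The opinion-mining step described above can be illustrated with a deliberately simplified lexicon-based sketch; the word lists, scoring rule, and thresholds are invented for illustration and are far cruder than the sentiment analysis the paper applies:

```python
# Hypothetical sentiment lexicon; the paper's actual model is not specified here.
POSITIVE = {"strong", "rising", "buy", "profit"}
NEGATIVE = {"weak", "falling", "sell", "crash"}

def sentiment_score(posts):
    """Net sentiment of community posts: +1 per positive word,
    -1 per negative word, after stripping trailing punctuation."""
    score = 0
    for post in posts:
        for raw in post.lower().split():
            word = raw.strip(".,!?;:")
            if word in POSITIVE:
                score += 1
            elif word in NEGATIVE:
                score -= 1
    return score

def predict_direction(posts):
    """Map net community sentiment to a predicted currency-value direction."""
    s = sentiment_score(posts)
    return "up" if s > 0 else ("down" if s < 0 else "flat")
```

A real system would replace the lexicon with a trained sentiment model and feed the scores, alongside historical prices, into a time-series predictor rather than a sign test.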
User-centered virtual environment assessment and design for cognitive rehabilitation applications
NASA Astrophysics Data System (ADS)
Fidopiastis, Cali Michael
Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE based cognitive rehabilitation.
De Leo, Gianluca; Diggs, Leigh A; Radici, Elena; Mastaglio, Thomas W
2014-02-01
Virtual-reality solutions have successfully been used to train distributed teams. This study aimed to investigate the correlation between user characteristics and sense of presence in an online virtual-reality environment where distributed teams are trained. A greater sense of presence has the potential to make training in the virtual environment more effective, leading to the formation of teams that perform better in a real environment. Being able to identify, before starting online training, those user characteristics that are predictors of a greater sense of presence can lead to the selection of trainees who would benefit most from the online simulated training. This is an observational study with a retrospective postsurvey of participants' user characteristics and degree of sense of presence. Twenty-nine members from 3 Air Force National Guard Medical Service expeditionary medical support teams participated in an online virtual environment training exercise and completed the Independent Television Commission-Sense of Presence Inventory survey, which measures sense of presence and user characteristics. Nonparametric statistics were applied to determine the statistical significance of user characteristics to sense of presence. Comparing user characteristics to the 4 scales of the Independent Television Commission-Sense of Presence Inventory using Kendall τ test gave the following results: the user characteristics "how often you play video games" (τ(26)=-0.458, P<0.01) and "television/film production knowledge" (τ(27)=-0.516, P<0.01) were significantly related to negative effects. Negative effects refer to adverse physiologic reactions owing to the virtual environment experience such as dizziness, nausea, headache, and eyestrain. The user characteristic "knowledge of virtual reality" was significantly related to engagement (τ(26)=0.463, P<0.01) and negative effects (τ(26)=-0.404, P<0.05). 
Individuals who have knowledge of virtual environments and experience with gaming environments report a higher sense of presence, which indicates that they will likely benefit more from online virtual training. Future research could include a larger population of expeditionary medical support personnel, and the results obtained could be used to build a model that predicts the level of presence from user characteristics. To maximize results and minimize costs, only those individuals who, based on their characteristics, are expected to have a higher sense of presence and fewer negative effects could be selected for online simulated virtual environment training.
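The Kendall τ statistic reported in the study above can be computed directly from concordant and discordant pairs. This is a minimal pure-Python sketch of the tau-a variant; the published analysis likely used a tie-corrected form (tau-b), which this sketch does not implement:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs. Ties are counted as neither."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        dx, dy = x[i] - x[j], y[i] - y[j]
        if dx * dy > 0:
            concordant += 1
        elif dx * dy < 0:
            discordant += 1
    n = len(x)
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A perfectly concordant pair of rankings yields 1.0; a fully reversed one yields -1.0, matching the sign convention of the negative correlations reported above.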
Development of a HIPAA-compliant environment for translational research data and analytics.
Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C
2014-01-01
High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.
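The two-level access control described above amounts to a conjunction of independent checks: an active VPN session and membership in the HPC directory. A hedged sketch follows; the function and data-structure names are hypothetical and not taken from the paper:

```python
def may_access_protected_env(user, vpn_sessions, hpc_directory):
    """Both authentication levels must hold: the user has an active
    university VPN session (first level) and is listed in the HPC
    network information service directory (second level)."""
    return user in vpn_sessions and user in hpc_directory
```

The design point is that either check failing denies access, so compromising one layer alone is insufficient.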
Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL
NASA Technical Reports Server (NTRS)
Dumas, Joseph D., II
2002-01-01
The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.
A virtual therapeutic environment with user projective agents.
Ookita, S Y; Tokuda, H
2001-02-01
Today, we see the Internet as more than just an information infrastructure: it is a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life due to its anonymous environment. Virtual-world interactions are bringing about new psychological illnesses ranging from net addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment, and there are very few therapeutic environments or tools made especially for virtual therapy. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, a therapeutic environment that allows the user to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the users' Projective Agents interact and collaborate to support their recovery and psychological growth. The prototype system provides a simulation environment where psychologists can adjust parameters and customize their own simulations. The model and tool are a first attempt to simulate personalities that may exist only online and to provide data for observation.
Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution
Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir
2016-01-01
Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2) and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. 
They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training and suggested many future environments they wished to experience. PMID:26882473
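The generic visual-to-audio mapping behind SSDs of this kind can be illustrated by sonifying a single image column: vertical position maps to pitch and pixel brightness to loudness. This is an illustrative sketch of the general principle, not EyeMusic's actual encoding scheme:

```python
def sonify_column(column, base_freq=220.0, semitone=2 ** (1 / 12)):
    """Map one grayscale image column (top-to-bottom list of 0-255
    brightness values) to (frequency, amplitude) pairs. Higher rows get
    higher pitches; brighter pixels get louder tones."""
    notes = []
    n = len(column)
    for row, brightness in enumerate(column):
        freq = base_freq * semitone ** (n - 1 - row)  # top row = highest pitch
        amp = brightness / 255.0                      # brightness = loudness
        notes.append((freq, amp))
    return notes
```

Scanning columns left to right over time would then sweep the whole scene, which is the usual way such devices convey 2-D layout through a 1-D audio stream.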
Virtual hand: a 3D tactile interface to virtual environments
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Borrel, Paul
2008-02-01
We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
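The pin-height adjustment described above can be sketched as sampling the virtual object's surface under each pin position and clamping the result to the pin's mechanical travel. The function and parameter names below are hypothetical, not from the paper:

```python
def pin_heights(surface_fn, pin_grid, max_travel):
    """For each pin at (x, y) under the virtual hand, sample the virtual
    object's surface height and clamp it to [0, max_travel], the pin's
    mechanical range. pin_grid is a 2-D grid of (x, y) pin positions."""
    return [[min(max(surface_fn(x, y), 0.0), max_travel) for (x, y) in row]
            for row in pin_grid]
```

In the real system this sampling would run inside the control loop that also drives the robot arm, so pin heights and arm pose stay consistent as the virtual hand moves.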
Suma, Evan A; Lipps, Zachary; Finkelstein, Samantha; Krum, David M; Bolas, Mark
2012-04-01
Natural walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce "impossible spaces," a new design mechanic for virtual environments that aim to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that larger virtual rooms that expanded to maximally fill our available 9.14 m × 9.14 m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space were uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
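The overlap percentages reported above can be defined as the floor area shared between two rooms divided by one room's area. A minimal sketch for axis-aligned rectangular rooms follows; this is a simplifying assumption, since the study's layouts need not be axis-aligned:

```python
def overlap_fraction(room_a, room_b):
    """Fraction of room_a's floor area shared with room_b.
    Rooms are axis-aligned rectangles given as (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = room_a
    bx0, by0, bx1, by1 = room_b
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))  # overlap width
    h = max(0.0, min(ay1, by1) - max(ay0, by0))  # overlap depth
    area_a = (ax1 - ax0) * (ay1 - ay0)
    return (w * h) / area_a
```

By this measure, the study's detection thresholds correspond to overlap fractions of about 0.56 for small rooms and 0.31 for rooms filling the 9.14 m workspace.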
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing
2013-01-01
This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…
Pre-Service Teachers' Perspectives on Using Scenario-Based Virtual Worlds in Science Education
ERIC Educational Resources Information Center
Kennedy-Clark, Shannon
2011-01-01
This paper presents the findings of a study on the current knowledge and attitudes of pre-service teachers on the use of scenario-based multi-user virtual environments in science education. The 28 participants involved in the study were introduced to "Virtual Singapura," a multi-user virtual environment, and completed an open-ended questionnaire.…
Virtual Atomic and Molecular Data Center (VAMDC) and Stark-B Database
NASA Astrophysics Data System (ADS)
Dimitrijevic, M. S.; Sahal-Brechot, S.; Kovacevic, A.; Jevremovic, D.; Popovic, L. C.; VAMDC Consortium; Dubernet, Marie-Lise
2012-01-01
The Virtual Atomic and Molecular Data Center (VAMDC) is a European FP7 project that aims to build a flexible, interoperable e-science environment-based interface to existing atomic and molecular data. VAMDC will be built upon the expertise of existing atomic and molecular databases, data producers, and service providers, with the specific aim of creating an infrastructure that is easily tuned to the requirements of a wide variety of users in academic, governmental, industrial, or public communities. VAMDC will also include the STARK-B database, which contains Stark broadening parameters for a large number of lines, obtained by the semiclassical perturbation method during more than 30 years of collaboration between two of the authors of this work (MSD and SSB) and their co-workers. In this contribution we review the VAMDC project and the STARK-B database, and discuss the benefits of both for the corresponding data users.
Exploring the User Experience of Three-Dimensional Virtual Learning Environments
ERIC Educational Resources Information Center
Shin, Dong-Hee; Biocca, Frank; Choo, Hyunseung
2013-01-01
This study examines the users' experiences with three-dimensional (3D) virtual environments to investigate the areas of development as a learning application. For the investigation, the modified technology acceptance model (TAM) is used with constructs from expectation-confirmation theory (ECT). Users' responses to questions about cognitive…
3D multiplayer virtual pets game using Google Card Board
NASA Astrophysics Data System (ADS)
Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam
2017-08-01
Virtual Reality (VR) is a technology which allows users to interact with a virtual environment generated and simulated by computer, and can give users the sensation of actually being in that environment. VR presents the virtual environment directly to the user rather than on a conventional screen, but it requires an additional device, known as a Head Mounted Display (HMD), to show the view. The Oculus Rift and Microsoft HoloLens are among the best-known HMDs used in VR. In 2014, Google Cardboard was introduced at the Google I/O developers conference; it is a VR platform that allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Cardboard to develop a pet-raising simulation game. Google Cardboard is used to create the view of the VR environment, while the view and controls in the VR environment are built using the Unity game engine. The simulation process is designed using a Finite State Machine (FSM), which helps structure the process clearly so that it describes the simulation of raising a pet well. Raising a pet is a fun activity, but conditions such as the environment, disease, and high cost can make it difficult to do in reality; this research therefore aims to explore and implement Google Cardboard in a pet-raising simulation.
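An FSM-driven pet simulation of the kind described can be sketched as a transition table keyed by (state, event) pairs. The states and events below are hypothetical examples, not taken from the paper:

```python
class PetFSM:
    """Finite state machine for a virtual pet: unknown events leave the
    current state unchanged. States and events are illustrative only."""
    TRANSITIONS = {
        ("idle", "feed"): "eating",
        ("eating", "done"): "idle",
        ("idle", "play"): "playing",
        ("playing", "tired"): "sleeping",
        ("sleeping", "wake"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Look up (state, event); fall back to the current state.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Keeping the transitions in a single table is what makes the FSM approach easy to inspect and extend, which matches the paper's motivation for using it to describe the simulation clearly.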
Adaptive User Model for Web-Based Learning Environment.
ERIC Educational Resources Information Center
Garofalakis, John; Sirmakessis, Spiros; Sakkopoulos, Evangelos; Tsakalidis, Athanasios
This paper describes the design of an adaptive user model and its implementation in an advanced Web-based Virtual University environment that encompasses combined and synchronized adaptation between educational material and well-known communication facilities. The Virtual University environment has been implemented to support a postgraduate…
Virtual reality and telerobotics applications of an Address Recalculation Pipeline
NASA Technical Reports Server (NTRS)
Regan, Matthew; Pose, Ronald
1994-01-01
The technology described in this paper was designed to reduce latency in user interactions within immersive virtual reality environments. It is also ideally suited to telerobotic applications such as interaction with remote robotic manipulators in space or in deep-sea operations. In such circumstances, the significant latency in response to user stimulus caused by communication delays, and the disturbing jerkiness due to low and unpredictable frame rates in compressed video feedback or computationally limited virtual worlds, can be masked by our techniques. The user is provided with highly responsive visual feedback independent of the communication or computational delays involved in providing physical video feedback or in rendering virtual world images. Virtual and physical environments can be combined seamlessly using these techniques.
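The latency-masking idea behind an address recalculation pipeline can be illustrated in miniature: render a wider image than the display needs, then select the visible window from the latest head orientation at display time, so head rotation never waits for a new frame. This is a simplified 1-D sketch; the real pipeline recalculates per-pixel display addresses in hardware:

```python
def viewport_offset(yaw_degrees, panorama_width):
    """Horizontal pixel offset into a pre-rendered 360-degree panorama,
    chosen from the latest head yaw at display time. Rendering latency
    is hidden because only this cheap lookup happens per refresh."""
    return int((yaw_degrees % 360.0) / 360.0 * panorama_width)
```

A slow renderer can then update the panorama at its own rate while the display loop calls this function at the refresh rate, which is the essence of decoupling head tracking from scene rendering.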
A Multi-User Virtual Environment for Building and Assessing Higher Order Inquiry Skills in Science
ERIC Educational Resources Information Center
Ketelhut, Diane Jass; Nelson, Brian C.; Clarke, Jody; Dede, Chris
2010-01-01
This study investigated novel pedagogies for helping teachers infuse inquiry into a standards-based science curriculum. Using a multi-user virtual environment (MUVE) as a pedagogical vehicle, teams of middle-school students collaboratively solved problems around disease in a virtual town called River City. The students interacted with "avatars" of…
Civic Participation among Seventh-Grade Social Studies Students in Multi-User Virtual Environments
ERIC Educational Resources Information Center
Zieger, Laura; Farber, Matthew
2012-01-01
Technological advances on the Internet now enable students to develop participation skills in virtual worlds. Similar to controlling a character in a video game, multi-user virtual environments, or MUVEs, allow participants to interact with others in synchronous, online settings. The authors of this study created a link between MUVEs and…
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu
2004-05-01
Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in the fields of entertainment, medicine, education, and so on. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
ERIC Educational Resources Information Center
Zhong, Ying
2013-01-01
Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…
NASA Astrophysics Data System (ADS)
Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.
2009-12-01
As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results, and codes from individual efforts with others that are not in a direct funded collaboration can also be a challenge with respect to time, cost, and expertise. We propose a modeling, data, and knowledge center, named the Ecosystem Modeling Center (EMC), that houses NASA satellite data, climate data, and ancillary data, and where a focused community may come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform. With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis, and compute environments that are customizable, archivable, and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by relieving users of redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also allows users to receive indirect assistance from experts through prefabricated compute environments, potentially reducing study "ramp up" times.
Validation of virtual reality as a tool to understand and prevent child pedestrian injury.
Schwebel, David C; Gaines, Joanna; Severson, Joan
2008-07-01
In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.
ERIC Educational Resources Information Center
Pares-Toral, Maria T.
2013-01-01
The ever increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs) or simply virtual worlds provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…
Altering User Movement Behaviour in Virtual Environments.
Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy
2017-04-01
In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performances to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
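A trajectory-deviation measure of the kind reported above can be sketched as the mean perpendicular distance of sampled positions from the straight segment between two waypoints. This is an assumption about the exact metric, which the abstract does not fully specify:

```python
import math

def mean_deviation(trajectory, start, goal):
    """Mean perpendicular distance (same units as the input) of sampled
    2-D trajectory points from the straight segment between start and
    goal, i.e. the shortest route between the two waypoints."""
    (x0, y0), (x1, y1) = start, goal
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Point-to-line distance via the 2-D cross product.
    devs = [abs(dy * (px - x0) - dx * (py - y0)) / length
            for (px, py) in trajectory]
    return sum(devs) / len(devs)
```

Under such a metric, positions in metres would directly yield the centimetre-scale deviations the study compares (e.g., 0.60 m versus 0.37 m).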
Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz
2016-03-01
Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, higher predisposition to be immersed and reduced cybersickness symptoms in the VRE than console gamer and nongamer. These findings can be a useful orientation in clinical practice and future research as they help identify which users are more predisposed to benefit from immersive VREs.
SmallTool - a toolkit for realizing shared virtual environments on the Internet
NASA Astrophysics Data System (ADS)
Broll, Wolfgang
1998-09-01
With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.
Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.
Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor
2008-03-01
To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.
Librarians without Borders? Virtual Reference Service to Unaffiliated Users
ERIC Educational Resources Information Center
Kibbee, Jo
2006-01-01
The author investigates issues faced by academic research libraries in providing virtual reference services to unaffiliated users. These libraries generally welcome visitors who use on-site collections and reference services, but are these altruistic policies feasible in a virtual environment? This paper reviews the use of virtual reference…
ERIC Educational Resources Information Center
Nelson, Brian C.; Erlandson, Benjamin E.
2008-01-01
In this paper, we explore how the application of multimedia design principles may inform the development of educational multi-user virtual environments (MUVEs). We look at design principles that have been shown to help learners manage cognitive load within multimedia environments and conduct a conjectural analysis of the extent to which such…
Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment
ERIC Educational Resources Information Center
Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth
2009-01-01
This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…
Virtual community centre for power wheelchair training: Experience of children and clinicians.
Torkia, Caryne; Ryan, Stephen E; Reid, Denise; Boissy, Patrick; Lemay, Martin; Routhier, François; Contardo, Resi; Woodhouse, Janet; Archambault, Phillipe S
2017-11-02
To: 1) characterize the overall experience in using the McGill immersive wheelchair - community centre (miWe-CC) simulator; and 2) investigate the experience of presence (i.e., sense of being in the virtual rather than in the real, physical environment) while driving a PW in the miWe-CC. A qualitative research design with structured interviews was used. Fifteen clinicians and 11 children were interviewed after driving a power wheelchair (PW) in the miWe-CC simulator. Data were analyzed using the conventional and directed content analysis approaches. Overall, participants enjoyed using the simulator and experienced a sense of presence in the virtual space. They felt a sense of being in the virtual environment, involved and focused on driving the virtual PW rather than on the surroundings of the actual room where they were. Participants reported several similarities between the virtual community centre layout and activities of the miWe-CC and the day-to-day reality of paediatric PW users. The simulator replicated participants' expectations of real-life PW use and promises to have an effect on improving the driving skills of new PW users. Implications for rehabilitation Among young users, the McGill immersive wheelchair (miWe) simulator provides an experience of presence within the virtual environment. This experience of presence is generated by a sense of being in the virtual scene, a sense of being involved, engaged, and focused on interacting within the virtual environment, and by the perception that the virtual environment is consistent with the real world. The miWe is a relevant and accessible approach, complementary to real world power wheelchair training for young users.
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi
Virtual Reality aims at creating an artificial environment that can be perceived as a substitute for a real setting. Much effort in research and development goes into the creation of virtual environments, most of which are perceivable only by the eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an "enriched" reality, technologically encompassing both computer-generated and real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user's mind.

ERIC Educational Resources Information Center
Ehrlich, Justin
2010-01-01
The application of virtual reality is becoming ever more important as technology reaches new heights allowing virtual environments (VE) complete with global illumination. One successful application of virtual environments is educational interventions meant to treat individuals with autism spectrum disorder (ASD). VEs are effective with these…
Virtual goods recommendations in virtual worlds.
Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren
2015-01-01
Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With rapidly growing user volumes in VWs, platform providers launch virtual goods in haste and push users to buy in order to increase sales revenue. However, this rapid development produces unrelated virtual items that are difficult to remarket. It not only wastes VW companies' resources, but also makes it difficult for users to find virtual goods that fit their virtual home in daily virtual life. In VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth, establishing their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. Contact strength and contact influence result from interactions with social neighbors and affect users' buying intentions. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, which considers social interactions and the social life circle, performs better than existing recommendation methods.
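A minimal sketch of the social-interaction idea: each virtual good is scored by the contact strength of the neighbors who bought it. This is a simplification for illustration; the paper's actual model combining contact strength and contact influence is richer than this, and the data below are hypothetical.

```python
from collections import defaultdict

def recommend(user, contact_strength, purchases, top_k=2):
    """Score each virtual good by summing the contact strength of the
    user's social neighbors who bought it, then return the top-k goods.
    A simplified sketch, not the paper's exact model."""
    scores = defaultdict(float)
    for neighbor, strength in contact_strength.get(user, {}).items():
        for item in purchases.get(neighbor, []):
            scores[item] += strength
    # Rank by descending score; break ties alphabetically for determinism.
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [item for item, _ in ranked[:top_k]]

# Hypothetical social circle: u1 interacts strongly with u2, weakly with u3.
strength = {"u1": {"u2": 0.8, "u3": 0.3}}
purchases = {"u2": ["sofa", "lamp"], "u3": ["lamp"]}
top = recommend("u1", strength, purchases)
```

Here "lamp" scores 0.8 + 0.3 = 1.1 and outranks "sofa" at 0.8, reflecting the intuition that goods popular among one's closest contacts are better recommendations.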
An Intelligent Crawler for a Virtual World
ERIC Educational Resources Information Center
Eno, Joshua
2010-01-01
Virtual worlds, which allow users to create and interact with content in a 3D, multi-user environment, are growing and becoming more integrated with the traditional flat web. However, little is empirically known about the content users create in virtual worlds and how it can be indexed and searched effectively. In order to gain a better understanding…
Global Village as Virtual Community (On Writing, Thinking, and Teacher Education).
ERIC Educational Resources Information Center
Polin, Linda
1993-01-01
Describes virtual communities known as Multi-User Simulated Environment (MUSE) or Multi-User Object Oriented environment (MOO), text-based computer "communities" whose inhabitants are a combination of the real people and constructed objects that people agree to treat as real. Describes their uses in the classroom. (SR)
Virtual reality and robotics for stroke rehabilitation: where do we go from here?
Wade, Eric; Winstein, Carolee J
2011-01-01
Promoting functional recovery after stroke requires collaborative and innovative approaches to neurorehabilitation research. Task-oriented training (TOT) approaches that include challenging, adaptable, and meaningful activities have led to successful outcomes in several large-scale multisite definitive trials. This, along with recent technological advances of virtual reality and robotics, provides a fertile environment for furthering clinical research in neurorehabilitation. Both virtual reality and robotics make use of multimodal sensory interfaces to affect human behavior. In the therapeutic setting, these systems can be used to quantitatively monitor, manipulate, and augment the users' interaction with their environment, with the goal of promoting functional recovery. This article describes recent advances in virtual reality and robotics and the synergy with best clinical practice. Additionally, we describe the promise shown for automated assessments and in-home activity-based interventions. Finally, we propose a broader approach to ensuring that technology-based assessment and intervention complement evidence-based practice and maintain a patient-centered perspective.
Encarnação, L Miguel; Bimber, Oliver
2002-01-01
Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches promise to provide valuable means for the interactive data analysis involved, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: intuitive interaction with multi-dimensional data in immersive and semi-immersive environments, as well as stereoscopic multi-user displays combining the advantages of Virtual and Augmented Reality technology.
Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA).
Lee, Yong-Gu; Lyons, Kevin W; Feng, Shaw C
2004-01-01
A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibitive. The information flow between the user and the VE is bidirectional, and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and cannot be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design.
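The shielding idea can be sketched as a thin adapter layer: the application codes against one abstract interface while each framework (tracker, renderer, haptics) sits behind its own adapter. All class and method names below are hypothetical illustrations, not VENSA's actual API.

```python
class DeviceAdapter:
    """Abstract interface the application sees; each framework is wrapped
    in a concrete adapter so frameworks never leak into application code.
    (Hypothetical names -- a sketch of the shielding pattern, not VENSA.)"""
    def poll(self):
        raise NotImplementedError

class TrackerAdapter(DeviceAdapter):
    def __init__(self, raw_source):
        # raw_source stands in for a framework-specific tracking API.
        self.raw_source = raw_source

    def poll(self):
        # Translate framework-specific data into the common format.
        x, y, z = self.raw_source()
        return {"position": (x, y, z)}

class Application:
    """Application logic depends only on the DeviceAdapter interface."""
    def __init__(self, adapters):
        self.adapters = adapters

    def frame(self):
        return [adapter.poll() for adapter in self.adapters]

# Swapping in a different tracking framework would only require a new
# adapter; the Application class is untouched.
app = Application([TrackerAdapter(lambda: (1.0, 2.0, 3.0))])
```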
Opportunities and Constraints in Disseminating Qualitative Research in Web 2.0 Virtual Environments.
Hays, Charles A; Spiers, Judith A; Paterson, Barbara
2015-11-01
The Web 2.0 digital environment is revolutionizing how users communicate and relate to each other, and how information is shared, created, and recreated within user communities. The social media technologies in the Web 2.0 digital ecosystem are fundamentally changing the opportunities and dangers in disseminating qualitative health research. The social changes influenced by digital innovations shift dissemination from passive consumption to user-centered, apomediated cooperative approaches, the features of which are underutilized by many qualitative researchers. We identify opportunities new digital media presents for knowledge dissemination activities including access to wider audiences with few gatekeeper constraints, new perspectives, and symbiotic relationships between researchers and users. We also address some of the challenges in embracing these technologies including lack of control, potential for unethical co-optation of work, and cyberbullying. Finally, we offer solutions to enhance research dissemination in sustainable, ethical, and effective strategies. © The Author(s) 2015.
Preservice Teachers Experience Reading Response Pedagogy in a Multi-User Virtual Environment
ERIC Educational Resources Information Center
Dooley, Caitlin McMunn; Calandra, Brendan; Harmon, Stephen
2014-01-01
This qualitative case study describes how 18 preservice teachers learned to nurture literary meaning-making via activities based on Louise Rosenblatt's Reader Response Theory within a multi-user virtual environment (MUVE). Participants re-created and responded to scenes from selected works of children's literature in Second Life as a way to…
Dynamic provisioning of local and remote compute resources with OpenStack
NASA Astrophysics Data System (ADS)
Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.
2015-12-01
Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
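The single point-of-entry idea can be sketched as a scheduler that merges local desktop slots and remote HPC slots into one pool, so the user submits jobs without knowing which backend runs them. The class below, including its preference for local slots, is an illustrative assumption, not the OpenStack-based implementation the paper describes.

```python
class VirtualCluster:
    """Sketch of a uniform, virtual compute cluster: local desktop VM
    slots and remote HPC slots form one pool behind a single submit()
    entry point. (Hypothetical scheduler, for illustration only.)"""

    def __init__(self, local_slots, remote_slots):
        # Local desktop slots are listed first, so they are preferred.
        self.free = ([("local", i) for i in range(local_slots)] +
                     [("remote", i) for i in range(remote_slots)])
        self.assignments = {}

    def submit(self, job_id):
        if not self.free:
            raise RuntimeError("no free slots")
        backend, slot = self.free.pop(0)
        self.assignments[job_id] = (backend, slot)
        return backend

# Two local slots fill first; the third job transparently spills over
# to a remote slot -- the user never specifies a backend.
cluster = VirtualCluster(local_slots=2, remote_slots=2)
backends = [cluster.submit(f"job{i}") for i in range(3)]
```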
Ergonomic aspects of a virtual environment.
Ahasan, M R; Väyrynen, S
1999-01-01
A virtual environment is an interactive graphic system, mediated through computer technology, that allows a certain level of reality or a sense of presence when accessing virtual information. To help create that sense of reality, this paper explores ergonomics issues in virtual environments, aiming to develop presentation formats and related information design that attain and maintain user-friendly applications.
An interactive VR system based on full-body tracking and gesture recognition
NASA Astrophysics Data System (ADS)
Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru
2016-10-01
Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. Other solutions use sensors such as the Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems remains a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. The movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to traditional navigation using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize users' gestures, such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.
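The foot-movement test that toggles the walking animation might look like the following sketch. The 2 cm threshold, the function signature, and the sample joint positions are assumptions for illustration, not values from the paper.

```python
def is_walking(prev_feet, curr_feet, threshold=0.02):
    """Decide the walking state from tracked foot joints: if either foot
    moved more than `threshold` (in meters) between consecutive frames,
    the user is considered to be walking, and the avatar's walking
    animation would be activated. (Threshold value is an assumption.)"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return any(dist(p, c) > threshold
               for p, c in zip(prev_feet, curr_feet))

# Hypothetical (x, y, z) positions of the left and right foot joints (m).
prev = [(0.0, 0.1, 0.0), (0.3, 0.1, 0.0)]
moving = [(0.0, 0.1, 0.05), (0.3, 0.1, 0.0)]   # left foot advanced 5 cm
still = [(0.0, 0.1, 0.001), (0.3, 0.1, 0.0)]   # sub-threshold jitter
```

In an engine like Unity, the boolean returned here would drive an animation-controller parameter that starts or stops the model's walk cycle.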
Greene, D D; Heeter, C
1998-01-01
Two new cancer patient information CD-ROMs extend the personal stories within virtual environments model of cancer patient information developed for Breast Cancer Lighthouse. Cancer Pain Retreat and Cancer Prevention Park: Games for Life are intended to inform and inspire users in an emotionally calming and intimately informative manner. The software offers users an experience--of visiting a virtual place and meeting and talking with patients and health care professionals.
Small satellite multi mission C2 for maximum effect
Miller, E.; Medina, O.; Lane, C.R.; Kirkham, A.; Ivancic, W.; Jones, B.; Risty, R.
2006-01-01
This paper discusses US Air Force, US Army, US Navy, and NASA demonstrations based around the Virtual Mission Operations Center (VMOC) and its application in fielding a Multi Mission Satellite Operations Center (MMSOC) designed to integrate small satellites into the inherently tiered system environment of operations. The intent is to begin standardizing the spacecraft to ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of Tactics, Techniques and Procedures (TTPs) that lead to Responsive Space employment. Combining the US Air Force/Army focus of theater command and control of payloads with the US Navy's user collaboration and FORCEnet consistent approach lays the groundwork for the fundamental change needed to maximize responsive space effects.
Inertial Motion-Tracking Technology for Virtual 3-D
NASA Technical Reports Server (NTRS)
2005-01-01
In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, to give the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.
NASA Technical Reports Server (NTRS)
Nguyen, Lac; Kenney, Patrick J.
1993-01-01
Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphics library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command-line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool, an X-Windows-based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. In conjunction with its existing interfaces and predefined suite of behaviors, future development plans for VIVID will be described. These include incorporation of dual-user virtual environment enhancements, tool expansion, and additional behaviors.
ERIC Educational Resources Information Center
Crane, Laura; Benachour, Phillip
2013-01-01
The paper describes the analysis of user location and time stamp information automatically logged when students receive and interact with electronic updates from the University's virtual learning environment. The electronic updates are sent to students' mobile devices using RSS feeds. The mobile reception of such information can be received in…
ERIC Educational Resources Information Center
Sohn, Johannah Eve
2014-01-01
This descriptive case study explores the implementation of a multi-user virtual environment (MUVE) in a Jewish supplemental school setting. The research was conducted to present the recollections and reflections of three constituent populations of a new technology exploring constructivist education in the context of supplemental and online…
NASA Astrophysics Data System (ADS)
Tsoupikova, Daria
2006-02-01
This paper will explore how the aesthetics of the virtual world affects, transforms, and enhances the immersive emotional experience of the user. What we see and what we do upon entering the virtual environment influences our feelings, mental state, physiological changes and sensibility. To create a unique virtual experience the important component to design is the beauty of the virtual world based on the aesthetics of the graphical objects such as textures, models, animation, and special effects. The aesthetic potency of the images that comprise the virtual environment can make the immersive experience much stronger and more compelling. The aesthetic qualities of the virtual world as born out through images and graphics can influence the user's state of mind. Particular changes and effects on the user can be induced through the application of techniques derived from the research fields of psychology, anthropology, biology, color theory, education, art therapy, music, and art history. Many contemporary artists and developers derive much inspiration for their work from their experience with traditional arts such as painting, sculpture, design, architecture and music. This knowledge helps them create a higher quality of images and stereo graphics in the virtual world. The understanding of the close relation between the aesthetic quality of the virtual environment and the resulting human perception is the key to developing an impressive virtual experience.
Nunnerley, Joanne; Gupta, Swati; Snell, Deborah; King, Marcus
2017-05-01
A user-centred design was used to develop and test the feasibility of an immersive 3D virtual reality wheelchair training tool for people with spinal cord injury (SCI). A Wheelchair Training System was designed and modelled using the Oculus Rift headset and a Dynamic Control wheelchair joystick. The system was tested by clinicians and expert wheelchair users with SCI. Data from focus groups and individual interviews were analysed using a general inductive approach to thematic analysis. Four themes emerged: Realistic System, which described the advantages of a realistic virtual environment; a Wheelchair Training System, which described participants' thoughts on the wheelchair training applications; Overcoming Resistance to Technology, the obstacles to introducing technology within the clinical setting; and Working outside the Rehabilitation Bubble which described the protective hospital environment. The Oculus Rift Wheelchair Training System has the potential to provide a virtual rehabilitation setting which could allow wheelchair users to learn valuable community wheelchair use in a safe environment. Nausea appears to be a side effect of the system, which will need to be resolved before this can be a viable clinical tool. Implications for Rehabilitation Immersive virtual reality shows promising benefit for wheelchair training in a rehabilitation setting. Early engagement with consumers can improve product development.
Guidelines for developing distributed virtual environment applications
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.
1998-08-01
We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework; and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas. These areas are requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.
Virtual Environments Supporting Learning and Communication in Special Needs Education
ERIC Educational Resources Information Center
Cobb, Sue V. G.
2007-01-01
Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
Concept mapping for virtual rehabilitation and training of the blind.
Sanchez, Jaime; Flores, Hector
2010-04-01
Concept mapping is a technique that strengthens the learning process through graphic representations of the learner's mental schemes. However, due to its graphic nature, it cannot be utilized by learners with visual disabilities. In response to this limitation we implemented a study involving the design of AudiodMC, an audio-based virtual environment for concept mapping designed for use by blind users and aimed at virtual training and rehabilitation. We analyzed the stages involved in the design of AudiodMC from a user-centered design perspective, considering user involvement and usability testing. These include an observation stage to learn how blind learners construct concept maps using concrete materials, a design stage to develop a software tool that aids blind users in creating concept maps, and a cognitive evaluation stage using AudiodMC. We also present the results of a study implemented to determine the impact of the use of this software on the development of essential skills for concept mapping (association, classification, categorization, sorting and summarizing). The results point to a high level of user acceptance, having identified key sound characteristics that help blind learners to learn concept codification and selection skills. The use of AudiodMC also allowed for the effective development of the skills under review in our research, thus facilitating meaningful learning.
Virtual GEOINT Center: C2ISR through an avatar's eyes
NASA Astrophysics Data System (ADS)
Seibert, Mark; Tidbal, Travis; Basil, Maureen; Muryn, Tyler; Scupski, Joseph; Williams, Robert
2013-05-01
As the number of devices collecting and sending data in the world increases, finding ways to visualize and understand those data is becoming more and more of a problem, often termed the problem of "Big Data." The Virtual Geoint Center (VGC) aims to help solve that problem by providing a way to combine the use of the virtual world with outside tools. Using open-source software such as OpenSim and Blender, the VGC uses a visually stunning 3D environment to display the data sent to it. The VGC is broken up into two major components: the Kinect Minimap and the Geoint Map. The Kinect Minimap uses the Microsoft Kinect and its open-source software to render a miniature display of the people the Kinect detects in front of it. The Geoint Map collects smartphone sensor information from online databases and displays it in real time on a map generated by Google Maps. By combining outside tools and the virtual world, the VGC can help a user "visualize" data, and provide additional tools to "understand" the data.
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, in which a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together with a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Efficient Learning Using a Virtual Learning Environment in a University Class
ERIC Educational Resources Information Center
Stricker, Daniel; Weibel, David; Wissmath, Bartholomaus
2011-01-01
This study examines a blended learning setting in an undergraduate course in psychology. A virtual learning environment (VLE) complemented the face-to-face lecture. The usage was voluntary and the VLE was designed to support the learning process of the students. Data from users (N = 80) and non-users (N = 82) from two cohorts were collected.…
ERIC Educational Resources Information Center
Ketelhut, Diane Jass
2007-01-01
This exploratory study investigated data-gathering behaviors exhibited by 100 seventh-grade students as they participated in a scientific inquiry-based curriculum project delivered by a multi-user virtual environment (MUVE). This research examined the relationship between students' self-efficacy on entry into the authentic scientific activity and…
Real-Life Migrants on the MUVE: Stories of Virtual Transitions
ERIC Educational Resources Information Center
Perkins, Ross A.; Arreguin, Cathy
2007-01-01
The communication and collaborative interface known as a multi-user virtual environment (MUVE), has existed since as early as the late 1970s. MUVEs refer to programs that have an animated character ("avatar") controlled by a user within a wider environment that can be explored--or built--at will. Second Life, a MUVE created by San Francisco-based…
Connors, Erin C; Yazzolino, Lindsay A; Sánchez, Jaime; Merabet, Lotfi B
2013-03-27
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
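The mapping from a building layout to audio cues can be sketched minimally. The hypothetical `audio_cue` function below derives a stereo pan and a distance-based gain from the user's position and heading on a 2D floor plan; it is a toy stand-in in the spirit of audio-based navigation, not AbES's actual sonification model, which the abstract does not specify.

```python
import math

def audio_cue(listener_pos, listener_heading, target_pos):
    """Toy spatial-audio cue: returns (pan, gain), where pan is in
    [-1, 1] (stereo balance) and gain in (0, 1] falls off with distance.
    Illustrative only; AbES's actual cue model is not reproduced here."""
    dx = target_pos[0] - listener_pos[0]
    dy = target_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)
    # Bearing of the target relative to the listener's heading
    # (radians, heading measured from the +x axis).
    bearing = math.atan2(dy, dx) - listener_heading
    pan = math.sin(bearing)        # 0 when the target is dead ahead
    gain = 1.0 / (1.0 + distance)  # louder when closer
    return pan, gain

# A target straight ahead is centered; off-axis targets are panned.
pan, gain = audio_cue((0.0, 0.0), 0.0, (3.0, 0.0))  # dead ahead: pan == 0.0
```

A real system would replace the pan/gain pair with head-related transfer functions or game-engine 3D audio, but the geometric core is the same.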
ERIC Educational Resources Information Center
Murphrey, Theresa Pesl; Rutherford, Tracy A.; Doerfert, David L.; Edgar, Leslie D.; Edgar, Don W.
2014-01-01
Educational technology continues to expand with multi-user virtual environments (e.g., Second Life™) being the latest technology. Understanding a virtual environment's usability can enhance educational planning and effective use. Usability includes the interaction quality between an individual and the item being assessed. The purpose was to assess…
Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback
NASA Astrophysics Data System (ADS)
Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve
2011-03-01
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine, as well as other tasks that require hand-eye coordination. One distinctive characteristic of HUVR is that users can place their hands inside the virtual environment without occluding the 3D image. Built using open-source software and consumer-level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.
Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play
NASA Astrophysics Data System (ADS)
Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven
2007-02-01
In this paper we describe our research using a multi-user virtual environment, Quest Atlantis, to embed fourth grade students in an aquatic habitat simulation. Specifically targeted towards engaging students in a rich inquiry investigation, we layered a socio-scientific narrative and an interactive rule set into a multi-user virtual environment gaming engine to establish a virtual world through which students learned about science inquiry, water quality concepts, and the challenges in balancing scientific and socio-economic factors. Overall, students were clearly engaged, participated in rich scientific discourse, submitted quality work, and learned science content. Further, through participation in this narrative, students developed a rich perceptual, conceptual, and ethical understanding of science. This study suggests that multi-user virtual worlds can be effectively leveraged to support academic content learning.
Erratum to: Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play
NASA Astrophysics Data System (ADS)
Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven
2010-08-01
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
A Virtual Mission Operations Center: Collaborative Environment
NASA Technical Reports Server (NTRS)
Medina, Barbara; Bussman, Marie; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
The intent of the Virtual Mission Operations Center - Collaborative Environment (VMOC-CE) is to provide a central access point for all the resources used in a collaborative mission operations environment, assisting mission operators in communicating on-site and off-site during the investigation and resolution of anomalies. It is a framework that, at a minimum, incorporates online chat, real-time file sharing, and remote application sharing components in one central location. The use of a collaborative environment in mission operations opens up the possibility of a central framework through which other project members can access and interact with mission operations staff remotely. The goal of the Virtual Mission Operations Center (VMOC) Project is to identify, develop, and infuse technology to enable mission control by on-call personnel in geographically dispersed locations. In order to achieve this goal, the following capabilities are needed: autonomous mission control systems; automated systems to contact on-call personnel; synthesis and presentation of mission control status and history information; desktop tools for data and situation analysis; a secure mechanism for remote collaborative commanding; and a collaborative environment for remote cooperative work. The VMOC-CE is a collaborative environment that facilitates remote cooperative work. It is an application instance of the Virtual System Design Environment (VSDE), developed by NASA Goddard Space Flight Center's (GSFC) Systems Engineering Services & Advanced Concepts (SESAC) Branch. The VSDE is a web-based portal that includes a knowledge repository and collaborative environment to serve science and engineering teams in product development. It is a "one stop shop" for product design, providing users real-time access to product development data, engineering and management tools, and relevant design specifications and resources through the Internet.
The initial focus of the VSDE has been to serve teams working in the early portion of the system/product lifecycle - concept development, proposal preparation, and formulation. The VMOC-CE expands the application of the VSDE into the operations portion of the system lifecycle. It will enable meaningful and real-time collaboration regardless of the geographical distribution of project team members. Team members will be able to interact in satellite operations, specifically for resolving anomalies, through access to a desktop computer and the Internet. Mission Operations Management will be able to participate and monitor up to the minute status of anomalies or other mission operations issues. In this paper we present the VMOC-CE project, system capabilities, and technologies.
Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying
2014-01-01
Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…
Meeting and Serving Users in Their New Work (and Play) Spaces
ERIC Educational Resources Information Center
Peters, Tom
2008-01-01
This article examines the public services component of digital and virtual libraries, focusing on the end-user experience. As the number and types of "places" where library users access library collections and services continue to expand (now including cell phones, iPods, and three-dimensional virtual reality environments populated by avatars),…
Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.
ERIC Educational Resources Information Center
Filby, A. M. Iliana
1996-01-01
Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…
Using Virtual Worlds in Education: Second Life[R] as an Educational Tool
ERIC Educational Resources Information Center
Baker, Suzanne C.; Wentz, Ryan K.; Woods, Madison M.
2009-01-01
The online virtual world Second Life (www.secondlife.com) has multiple potential uses in teaching. In Second Life (SL), users create avatars that represent them in the virtual world. Within SL, avatars can interact with each other and with objects and environments. SL offers tremendous creative potential in that users can create content within the…
A MOO-Based Virtual Training Environment.
ERIC Educational Resources Information Center
Mateas, Michael; Lewis, Scott
1996-01-01
Describes the implementation of a virtual environment to support the training of engineers in Panels of Experts (POE), a vehicle for gathering customer data. Describes the environment, discusses some issues of communication and interaction raised by the technology, and relays the experiences of new users within this environment. (RS)
Ambient clumsiness in virtual environments
NASA Astrophysics Data System (ADS)
Ruzanka, Silvia; Behar, Katherine
2010-01-01
A fundamental pursuit of Virtual Reality is the experience of a seamless connection between the user's body and actions within the simulation. Virtual worlds often mediate the relationship between the physical and virtual body through creating an idealized representation of the self in an idealized space. This paper argues that the very ubiquity of the medium of virtual environments, such as the massively popular Second Life, has now made them mundane, and that idealized representations are no longer appropriate. In our artwork we introduce the attribute of clumsiness to Second Life by creating and distributing scripts that cause users' avatars to exhibit unpredictable stumbling, tripping, and momentary poor coordination, thus subtly and unexpectedly intervening with, rather than amplifying, a user's intent. These behaviors are publicly distributed, and manifest only occasionally - rather than intentional, conscious actions, they are involuntary and ambient. We suggest that the physical human body is itself an imperfect interface, and that the continued blurring of distinctions between the physical body and virtual representations calls for the introduction of these mundane, clumsy elements.
Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds
NASA Astrophysics Data System (ADS)
Minocha, Shailey; Reeves, Ahmad John
Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit', interacting via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and way finding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.
NASA Astrophysics Data System (ADS)
Schmeil, Andreas; Eppler, Martin J.
Despite the fact that virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and of application experiences, it still remains unclear how to best benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples as a result of the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.
A Case Study in User Support for Managing OpenSim Based Multi User Learning Environments
ERIC Educational Resources Information Center
Perera, Indika; Miller, Alan; Allison, Colin
2017-01-01
Immersive 3D Multi User Learning Environments (MULE) have shown sufficient success to warrant their consideration as a mainstream educational paradigm. These are based on 3D Multi User Virtual Environment platforms (MUVE), and although they have been used for various innovative educational projects their complex permission systems and large…
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2010-01-01
Meal-Maker: A Virtual Meal Preparation Environment for Children with Cerebral Palsy
ERIC Educational Resources Information Center
Kirshner, Sharon; Weiss, Patrice L.; Tirosh, Emanuel
2011-01-01
Virtual reality (VR) technology enables evaluation and practice of specific skills in a motivating, user-friendly and safe way. The implementation of virtual game environments within clinical settings has increased substantially in recent years. However, the psychometric properties and feasibility of many applications have not been fully…
Introducing and Evaluating the Behavior of Non-Verbal Features in the Virtual Learning
ERIC Educational Resources Information Center
Dharmawansa, Asanka D.; Fukumura, Yoshimi; Marasinghe, Ashu; Madhuwanthi, R. A. M.
2015-01-01
The objective of this research is to introduce the behavior of non-verbal features of e-Learners in the virtual learning environment, to establish a faithful representation of the real user by the avatar who represents the e-Learner in the virtual environment, and to examine the behavior of the non-verbal features during the virtual learning…
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, owing especially to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing level of detail in virtual environments, without any hardware for head or eye tracking.
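The core idea of blending bottom-up saliency with top-down context can be sketched per object. The function below is a hedged illustration: the linear blend, the normalization, and the weight `alpha` are assumptions for exposition, not the paper's actual combination rule.

```python
import numpy as np

def attended_object(bottom_up, top_down, alpha=0.6):
    """Pick the most plausibly attended object by blending a bottom-up
    (stimulus-driven) saliency score with a top-down (goal-directed)
    context score per candidate object. The blend and alpha are
    illustrative assumptions, not the published algorithm."""
    bu = np.asarray(bottom_up, dtype=float)
    td = np.asarray(top_down, dtype=float)

    # Normalize each cue to [0, 1] so neither dominates by scale alone.
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    combined = alpha * norm(bu) + (1 - alpha) * norm(td)
    return int(np.argmax(combined)), combined

# Object 1 is the most visually salient, but the top-down cue favors 0 and 2.
idx, scores = attended_object([0.2, 0.9, 0.4], [0.8, 0.1, 0.7])
```

In the paper's setting the top-down scores would come from the user's spatial and temporal behaviors (e.g., navigation direction and dwell), rather than being given directly.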
DWTP: a basis for networked VR on the Internet
NASA Astrophysics Data System (ADS)
Broll, Wolfgang; Schick, Daniel
1998-04-01
Shared virtual worlds are one of today's major research topics. While limited to particular application areas and high-speed networks in the past, they are becoming available to ever larger numbers of users. One reason for this development was the introduction of VRML (the Virtual Reality Modeling Language), which has been established as a standard for the exchange of 3D worlds on the Internet. Although a number of prototype systems have been developed to realize shared multi-user worlds based on VRML, no suitable network protocol to support the demands of such environments has yet been established. In this paper we will introduce our approach to a network protocol for shared virtual environments: DWTP--the Distributed Worlds Transfer and communication Protocol. We will show how DWTP meets the demands of shared virtual environments on the Internet. We will further present SmallView, our prototype of a distributed multi-user VR system, to show how DWTP can be used to realize shared worlds.
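The basic replication problem such protocols address is propagating object position and state changes to every peer. The sketch below uses a JSON message with a per-object sequence number to discard stale updates; the framing and field names are assumptions for illustration, since DWTP's actual wire format is not described in this abstract.

```python
import json, time

def make_update(object_id, position, state, seq):
    """Toy wire message replicating an object's position/state change to
    peers in a shared virtual world. JSON framing is an assumption for
    illustration only; it is not DWTP's actual message format."""
    return json.dumps({
        "type": "object_update",
        "id": object_id,
        "pos": position,   # [x, y, z] in world coordinates
        "state": state,    # application-defined object state
        "seq": seq,        # per-object sequence number
        "t": time.time(),  # sender timestamp
    })

def apply_update(world, message):
    """Apply an update to the local world copy, discarding stale
    (out-of-order) messages by sequence number."""
    msg = json.loads(message)
    obj = world.setdefault(msg["id"], {"seq": -1})
    if msg["seq"] > obj["seq"]:
        obj.update(pos=msg["pos"], state=msg["state"], seq=msg["seq"])
    return world

world = {}
apply_update(world, make_update("door_3", [1.0, 0.0, 2.0], "open", 1))
apply_update(world, make_update("door_3", [1.0, 0.0, 2.0], "closed", 0))  # stale, ignored
```

Each peer would broadcast such messages (e.g., over UDP multicast for frequent position updates, with reliable delivery reserved for critical state changes) and apply incoming ones to keep its copy of the world current.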
Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis
ERIC Educational Resources Information Center
Chow, Meyrick
2016-01-01
There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…
NASA Technical Reports Server (NTRS)
Duncan, K. M.; Harm, D. L.; Crosier, W. G.; Worthington, J. W.
1993-01-01
A unique training device is being developed at the Johnson Space Center Neurosciences Laboratory to help reduce or eliminate Space Motion Sickness (SMS) and spatial orientation disturbances that occur during spaceflight. The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) uses virtual reality technology to simulate some sensory rearrangements experienced by astronauts in microgravity. By exposing a crew member to this novel environment preflight, it is expected that he/she will become partially adapted, and thereby suffer fewer symptoms inflight. The DOME PAT is a 3.7 m spherical dome, within which a 170 by 100 deg field of view computer-generated visual database is projected. The visual database currently in use depicts the interior of a Shuttle spacelab. The trainee uses a six degree-of-freedom, isometric force hand controller to navigate through the virtual environment. Alternatively, the trainee can be 'moved' about within the virtual environment by the instructor, or can look about within the environment by wearing a restraint that controls scene motion in response to head movements. The computer system is comprised of four personal computers that provide the real time control and user interface, and two Silicon Graphics computers that generate the graphical images. The image generator computers use custom algorithms to compensate for spherical image distortion, while maintaining a video update rate of 30 Hz. The DOME PAT is the first such system known to employ virtual reality technology to reduce the untoward effects of the sensory rearrangement associated with exposure to microgravity, and it does so in a very cost-effective manner.
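The spherical-distortion compensation mentioned above can be sketched as a radial pre-warp applied to image coordinates before projection. The polynomial model and coefficient below are assumptions chosen for illustration; the DOME PAT's custom correction algorithms are not published in this abstract.

```python
import math

def prewarp(u, v, k=0.18):
    """Toy radial pre-distortion for dome projection: displace each
    normalized image point (in the [-1, 1] square) outward so that,
    after the dome optics compress the periphery, straight lines appear
    straight. The quadratic model and k are illustrative assumptions."""
    r = math.hypot(u, v)
    scale = 1.0 + k * r * r  # simple radial polynomial model
    return u * scale, v * scale

# The image center is untouched; points near the edge move outward.
center = prewarp(0.0, 0.0)   # -> (0.0, 0.0)
edge = prewarp(1.0, 0.0)     # displaced outward along +u
```

Real dome-correction pipelines typically do the inverse lookup per output pixel on the GPU; a scalar radial model like this is only the simplest member of that family.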
Possibilities and Determinants of Using Low-Cost Devices in Virtual Education Applications
ERIC Educational Resources Information Center
Bun, Pawel Kazimierz; Wichniarek, Radoslaw; Górski, Filip; Grajewski, Damian; Zawadzki, Przemyslaw; Hamrol, Adam
2017-01-01
Virtual reality (VR) may be used as an innovative educational tool. However, in order to fully exploit its potential, it is essential to achieve the effect of immersion. To more completely submerge the user in a virtual environment, it is necessary to ensure that the user's actions are directly translated into the image generated by the…
Design Concerns in the Engineering of Virtual Worlds for Learning
ERIC Educational Resources Information Center
Rapanotti, Lucia; Hall, Jon G.
2011-01-01
The convergence of 3D simulation and social networking into current multi-user virtual environments has opened the door to new forms of interaction for learning in order to complement the face-to-face and Web 2.0-based systems. Yet, despite a growing user community, design knowledge for virtual worlds remains patchy, particularly when it comes to…
Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António
2014-01-01
Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities that keep their native features and characteristics while being seamlessly promoted to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home (AISH). This initiative arose as a way to address a widespread problem in our country, where many elderly people live alone in their homes, often with limited or no physical mobility. The project relies on the mobile agent computing paradigm to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, in order to always be present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when he or she goes out. According to the evaluation carried out, the Virtual Butler was assessed as an engaging and well-liked digital companion, filling the gap between the user and the smart home.
The evaluation also showed that the Virtual Butler concept can be easily ported to other types of possible smart and assistive environments like airports, hospitals, shopping malls, offices, etc. PMID:25102342
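The migration behaviour described above, where the Virtual Butler's avatar follows the user to the nearest computing device, can be sketched as a nearest-device selection. The device names, 2D coordinates, and Euclidean metric here are illustrative assumptions, not details of the AISH deployment:

```python
def closest_device(user_pos, devices):
    """Pick the computing device nearest the user so a mobile agent
    can migrate there and remain 'always present'.

    user_pos: (x, y) location of the user.
    devices:  mapping of device name -> (x, y) location.
    """
    def dist2(p):
        # Squared Euclidean distance; no sqrt needed for comparison.
        return (p[0] - user_pos[0]) ** 2 + (p[1] - user_pos[1]) ** 2
    return min(devices, key=lambda name: dist2(devices[name]))
```

In a real mobile-agent system the result would trigger an agent migration to the selected host rather than just returning its name.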
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. The VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data is supplied in open standard formats using international standards like GeoSciML. A VGL user uses a web mapping interface to discover the data sources and to define a subset using spatial and attribute filters. Once the data is selected, the user is not required to download it. VGL collates the service query information for later use in the processing workflow, where it will be staged directly to the computing facilities. The combination of deferring data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper.
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research community and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used for, for example, natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
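The deferred-download pattern the abstract describes, collating an OGC web service query instead of fetching the data, can be sketched as building (but not executing) a WFS GetFeature request. The endpoint URL and layer name below are placeholders, not the actual AuScope service addresses:

```python
from urllib.parse import urlencode

def collate_wfs_query(endpoint, type_name, bbox, max_features=None):
    """Build an OGC WFS GetFeature request URL without executing it,
    so the query can be staged directly to cloud compute later.

    bbox: (minx, miny, maxx, maxy) spatial filter.
    """
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,
        "bbox": ",".join(str(c) for c in bbox),
    }
    if max_features is not None:
        params["maxFeatures"] = str(max_features)
    return endpoint + "?" + urlencode(params)

# Hypothetical usage: a GeoSciML borehole layer subset by bounding box.
job = collate_wfs_query("http://example.org/wfs", "gsml:Borehole",
                        (130.0, -30.0, 135.0, -25.0), max_features=500)
```

Only this URL (plus workflow metadata) travels to the cloud job; the cloud nodes then pull the data themselves at full resolution.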
Active Learning through the Use of Virtual Environments
ERIC Educational Resources Information Center
Mayrose, James
2012-01-01
Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…
A Virtual World Workshop Environment for Learning Agile Software Development Techniques
ERIC Educational Resources Information Center
Parsons, David; Stockdale, Rosemary
2012-01-01
Multi-User Virtual Environments (MUVEs) are the subject of increasing interest for educators and trainers. This article reports on a longitudinal project that seeks to establish a virtual agile software development workshop hosted in the Open Wonderland MUVE, designed to help learners to understand the basic principles of some core agile software…
Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces
NASA Astrophysics Data System (ADS)
Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana
Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive system capable of placing trainees in a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user's "sense of touch" within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging.
Virtual Tour Environment of Cuba's National School of Art
NASA Astrophysics Data System (ADS)
Napolitano, R. K.; Douglas, I. P.; Garlock, M. E.; Glisic, B.
2017-08-01
Innovative technologies have enabled new opportunities for collecting, analyzing, and sharing information about cultural heritage sites. Through a combination of two of these technologies, spherical imaging and virtual tour environments, we preliminarily documented one of Cuba's National Schools of Art, the National Ballet School. The Ballet School is one of the five National Art Schools built in Havana, Cuba after the revolution. Due to changes in the political climate, construction was halted on the schools before completion. The Ballet School in particular was partially completed but never used for its intended purpose. Over the years, the surrounding vegetation and environment have started to overtake the buildings; damage such as missing bricks, corroded rebar, and broken tie bars can be seen. We created a virtual tour through the Ballet School which highlights key satellite classrooms and the main domed performance spaces. Scenes of the virtual tour were captured utilizing the Ricoh Theta S spherical imaging camera and processed with Kolor Panotour virtual environment software. Different forms of data can be included in this environment in order to provide a user with pertinent information. Image galleries, hyperlinks to websites, videos, PDFs, and links to databases can be embedded within a scene and interacted with by a user. By including this information within the virtual tour, a user can better understand how the site was constructed as well as the existing types of damage. The results of this work are recommendations for how a site can be preliminarily documented and how information can be initially organized and shared.
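A virtual tour of this kind is essentially a graph of spherical panoramas linked by hotspot markers, each optionally carrying embedded media. The scene names, angles, and file names below are illustrative, not the actual Panotour project of the Ballet School:

```python
# Minimal data model for a spherical-panorama virtual tour: scenes
# linked by "goto" hotspots, plus media hotspots (galleries, PDFs, etc.).
tour = {
    "entrance": {
        "image": "entrance_360.jpg",
        "hotspots": [
            {"yaw": 40.0, "pitch": -5.0, "goto": "dome"},
            {"yaw": -75.0, "pitch": 0.0, "media": "damage_gallery.html"},
        ],
    },
    "dome": {
        "image": "dome_360.jpg",
        "hotspots": [{"yaw": 180.0, "pitch": 0.0, "goto": "entrance"}],
    },
}

def reachable(tour, start):
    """Return the set of scenes reachable from `start` by following
    goto hotspots -- a quick sanity check that no scene is orphaned."""
    seen, stack = set(), [start]
    while stack:
        scene = stack.pop()
        if scene in seen:
            continue
        seen.add(scene)
        for h in tour[scene]["hotspots"]:
            if "goto" in h:
                stack.append(h["goto"])
    return seen
```

Checking reachability from the entry scene before publishing catches broken navigation links early.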
NASA Technical Reports Server (NTRS)
Yeske, Lanny A.
1998-01-01
Numerous FY1998 student research projects were sponsored by the Mississippi State University Center for Air Sea Technology. This technical note describes these projects which include research on: (1) Graphical User Interfaces, (2) Master Environmental Library, (3) Database Management Systems, (4) Naval Interactive Data Analysis System, (5) Relocatable Modeling Environment, (6) Tidal Models, (7) Book Inventories, (8) System Analysis, (9) World Wide Web Development, (10) Virtual Data Warehouse, (11) Enterprise Information Explorer, (12) Equipment Inventories, (13) COADS, and (14) JavaScript Technology.
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. 
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
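The fusion step described above, combining IMU orientation with motion-capture marker positions into one pose for the virtual surrogate, can be sketched as follows. This is a generic quaternion-plus-centroid composition under assumed conventions, not NASA's actual implementation:

```python
def quat_to_matrix(w, x, y, z):
    """Unit quaternion (e.g. from the tablet's IMU) to a 3x3 rotation
    matrix, row-major."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def tablet_pose(imu_quat, marker_positions):
    """Fuse IMU orientation with the centroid of the reflective markers
    into a 4x4 pose matrix for placing the virtual surrogate."""
    n = len(marker_positions)
    cx = sum(p[0] for p in marker_positions) / n
    cy = sum(p[1] for p in marker_positions) / n
    cz = sum(p[2] for p in marker_positions) / n
    r = quat_to_matrix(*imu_quat)
    # Rotation from the IMU, translation from the mocap centroid.
    return [r[0] + [cx], r[1] + [cy], r[2] + [cz], [0.0, 0.0, 0.0, 1.0]]
```

Rendering then draws the surrogate model under this transform, relative to the tracked pose of the user's head-mounted display.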
NASA Astrophysics Data System (ADS)
Mousas, Christos; Anagnostopoulos, Christos-Nikolaos
2017-06-01
This paper presents a hybrid character control interface that provides the ability to synthesize in real-time a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control that enables the direct mapping of the user's pose to the character, the motion controller that synthesizes the desired motion of the character based on an activity recognition methodology, and the hybrid control that lies within the performance animation and the motion controller. With the methodology presented, the user will have the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her, but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.
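The three control modules can be sketched as a mode-selection rule: direct pose mapping when no higher-level activity is recognized, fully synthesized motion when the recognized action cannot be performed by the user, and hybrid control otherwise. The mode labels and selection criterion are illustrative assumptions, not the paper's exact logic:

```python
def choose_control_mode(recognized_activity, directly_performable):
    """Select a character-control module for the current frame.

    recognized_activity: label from the activity recognizer, or None
                         if the user is just moving freely.
    directly_performable: set of activities the user's own captured
                          pose can drive without synthesis.
    """
    if recognized_activity is None:
        return "performance_animation"   # direct pose-to-character mapping
    if recognized_activity in directly_performable:
        return "hybrid"                  # blend user pose with synthesis
    return "motion_controller"           # fully synthesized action
```

Per frame, the selected module produces the character pose; switching modes only when the recognizer's label changes keeps the output temporally coherent.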
Experimental Internet Environment Software Development
NASA Technical Reports Server (NTRS)
Maddux, Gary A.
1998-01-01
Geographically distributed project teams need an Internet based collaborative work environment or "Intranet." The Virtual Research Center (VRC) is an experimental Intranet server that combines several services such as desktop conferencing, file archives, on-line publishing, and security. Using the World Wide Web (WWW) as a shared space paradigm, the Graphical User Interface (GUI) presents users with images of a lunar colony. Each project has a wing of the colony and each wing has a conference room, library, laboratory, and mail station. In FY95, the VRC development team proved the feasibility of this shared space concept by building a prototype using a Netscape commerce server and several public domain programs. Successful demonstrations of the prototype resulted in approval for a second phase. Phase 2, documented by this report, will produce a seamlessly integrated environment by introducing new technologies such as Java and Adobe Web Links to replace less efficient interface software.
Comparative study on collaborative interaction in non-immersive and immersive systems
NASA Astrophysics Data System (ADS)
Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki
2007-09-01
This research studies Virtual Reality simulation for collaborative interaction, so that different people in different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where the object's behavior is determined by the combination of the multiple inputs. Issues addressed in this research are: 1) the effects of using haptics on collaborative interaction, and 2) the possibilities of collaboration between users in different environments. We conducted user tests on our system in several cases: 1) comparison between non-haptic and haptic collaborative interaction over LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. The case studies are two scenarios: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe physical laws while constructing a dollhouse from existing building blocks under gravity effects. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.
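The core idea, an object's behavior determined by the combination of multiple users' inputs plus gravity, can be sketched as one explicit integration step where the users' forces are simply summed. The 2D simplification, constants, and summation rule are assumptions of this sketch, not the paper's simulation:

```python
def step_shared_object(position, velocity, user_forces, mass=1.0,
                       gravity=-9.8, dt=0.01):
    """Advance a shared object by one time step.

    position, velocity: (x, y) tuples.
    user_forces: list of (fx, fy) tuples, one per connected user;
                 their sum drives the object, as when several users
                 carry a stretcher together.
    """
    fx = sum(f[0] for f in user_forces)
    fy = sum(f[1] for f in user_forces) + mass * gravity  # add weight
    ax, ay = fx / mass, fy / mass
    vx, vy = velocity[0] + ax * dt, velocity[1] + ay * dt
    return (position[0] + vx * dt, position[1] + vy * dt), (vx, vy)
```

In a networked setting each client sends its force each tick and a designated host (or each replica, deterministically) runs this step and broadcasts the resulting state.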
The Effect of Procedural Guidance on Students' Skill Enhancement in a Virtual Chemistry Laboratory
ERIC Educational Resources Information Center
Ullah, Sehat; Ali, Numan; Rahman, Sami Ur
2016-01-01
Various cognitive aids (such as change of color, arrows, etc.) are provided in virtual environments to assist users in task realization. These aids increase users' performance but lead to reduced learning because there is less cognitive load on the users. In this paper we present a new concept of procedural guidance in which textual information…
Cognitive Styles and Virtual Environments.
ERIC Educational Resources Information Center
Ford, Nigel
2000-01-01
Discussion of navigation through virtual information environments focuses on the need for robust user models that take into account individual differences. Considers Pask's information processing styles and strategies; deep (transformational) and surface (reproductive) learning; field dependence/independence; divergent/convergent thinking;…
A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring
Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio
2016-01-01
This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State-of-the-art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user's point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regard to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505
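The virtual-sensor-network idea, one mobile robot standing in for many fixed sensors, can be sketched as a patrol loop: visit each waypoint and report the sample under that waypoint's virtual-sensor id. The waypoint names and the `measure` callback (standing in for the robot's real sensor driver and navigation stack) are assumptions of this sketch:

```python
def virtual_sensor_readings(waypoints, measure):
    """Emulate a fixed environmental sensor network with one robot.

    waypoints: mapping of virtual-sensor id -> (x, y) patrol location.
    measure:   callable (x, y) -> reading; stands in for navigating
               to the point and sampling the onboard sensor.
    """
    readings = {}
    for sensor_id, (x, y) in waypoints.items():
        # On the real robot, autonomous navigation to (x, y) goes here.
        readings[sensor_id] = measure(x, y)
    return readings

# Toy measurement model: temperature rises with distance from the
# cooling unit at the origin (purely illustrative).
readings = virtual_sensor_readings(
    {"rack_a1": (0.0, 0.0), "rack_b3": (3.0, 4.0)},
    measure=lambda x, y: 18.0 + 0.5 * (x * x + y * y) ** 0.5,
)
```

Reconfiguring the "network" is then just editing the waypoint dictionary, which is the scalability advantage the paper claims over physically rewiring sensors.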
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.
2004-12-01
Several years ago we created an exciting and engaging multimedia exhibit for the Hatfield Marine Science Center that lets visitors simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. The public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. We are now completing a revision to the project that will make this engaging virtual exploration accessible to a much larger audience. With minor modifications we will be able to put the exhibit onto the world wide web so that any person with internet access can view and learn about exciting volcanic and hydrothermal activity at Axial Seamount on the Juan de Fuca Ridge. The modifications address some cosmetic and logistic issues confronted in the museum environment, but will mainly involve compressing video clips so they can be delivered more efficiently over the internet. The web version, like the museum version, will allow users to choose from one of three different dive sites in the caldera of Axial Volcano. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360 degree panorama and can look in all directions by manipulating the computer mouse. By clicking on markers embedded in the scene, the user can then either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene.
Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user is finished exploring the dive site they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. This virtual exploration is part of the NeMO web site and will be at this URL http://www.pmel.noaa.gov/vents/dive.html
A strategic map for high-impact virtual experience design
NASA Astrophysics Data System (ADS)
Faste, Haakon; Bergamasco, Massimo
2009-02-01
We have employed methodologies of human centered design to inspire and guide the engineering of a definitive low-cost aesthetic multimodal experience intended to stimulate cultural growth. Using a combination of design research, trend analysis and the programming of immersive virtual 3D worlds, over 250 innovative concepts have been brainstormed, prototyped, evaluated and refined. These concepts have been used to create a strategic map for the development of high-impact virtual art experiences, the most promising of which have been incorporated into a multimodal environment programmed in the online interactive 3D platform XVR. A group of test users have evaluated the experience as it has evolved, using a multimodal interface with stereo vision, 3D audio and haptic feedback. This paper discusses the process, content, results, and impact on our engineering laboratory that this research has produced.
DHM simulation in virtual environments: a case-study on control room design.
Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G
2012-01-01
This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room within a participative design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed considering ergonomics standards. Using the Unity3D platform to design the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between end users, managers, and the human factors team.
ERIC Educational Resources Information Center
Dukas, Georg
2009-01-01
Though research in emerging technologies is vital to fulfilling their incredible potential for educational applications, it is often fraught with analytic challenges related to large datasets. This thesis explores these challenges in researching multiuser virtual environments (MUVEs). In a MUVE, users assume a persona and traverse a virtual space…
The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment
NASA Technical Reports Server (NTRS)
Bryson, Steve
1993-01-01
This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.
User modeling for distributed virtual environment intelligent agents
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.
1999-07-01
This paper emphasizes the requirement for user modeling by presenting the information necessary to motivate the need for and use of user modeling in intelligent agent development. The paper presents information on our current intelligent agent development program, the Symbiotic Information Reasoning and Decision Support (SIRDS) project. We then discuss the areas of intelligent agents and user modeling, which form the foundation of the SIRDS project. Included in the discussion of user modeling are its major components, cognitive modeling and behavioral modeling. We next motivate the need for and use of a methodology, encompassing cognitive task analysis, for developing user models. We close the paper by drawing conclusions from our current intelligent agent research project and discussing avenues of future research in the utilization of user modeling for the development of intelligent agents for virtual environments.
ERIC Educational Resources Information Center
Panettieri, Joseph C.
2007-01-01
Across the globe, progressive universities are embracing any number of MUVEs (multi-user virtual environments), 3D environments, and "immersive" virtual reality tools. And within the next few months, several universities are expected to test so-called "telepresence" videoconferencing systems from Cisco Systems and other leading…
Using smartphone technology to deliver a virtual pedestrian environment: usability and validation.
Schwebel, David C; Severson, Joan; He, Yefei
2017-09-01
Various programs effectively teach children to cross streets more safely, but all are labor- and cost-intensive. Recent developments in mobile phone technology offer opportunity to deliver virtual reality pedestrian environments to mobile smartphone platforms. Such an environment may offer a cost- and labor-effective strategy to teach children to cross streets safely. This study evaluated usability, feasibility, and validity of a smartphone-based virtual pedestrian environment. A total of 68 adults completed 12 virtual crossings within each of two virtual pedestrian environments, one delivered by smartphone and the other a semi-immersive kiosk virtual environment. Participants completed self-report measures of perceived realism and simulator sickness experienced in each virtual environment, plus self-reported demographic and personality characteristics. All participants followed system instructions and used the smartphone-based virtual environment without difficulty. No significant simulator sickness was reported or observed. Users rated the smartphone virtual environment as highly realistic. Convergent validity was detected, with many aspects of pedestrian behavior in the smartphone-based virtual environment matching behavior in the kiosk virtual environment. Anticipated correlations between personality and kiosk virtual reality pedestrian behavior emerged for the smartphone-based system. A smartphone-based virtual environment can be usable and valid. Future research should develop and evaluate such a training system.
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. It executes the genetic algorithm parallelly and distributedly on several selected physical hosts in the first stage. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and it is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
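The two-stage DPGA described above can be sketched as an island-model genetic algorithm: independent first-stage runs (standing in for the selected physical hosts), whose best solutions seed a second-stage run. The fitness function, operators, and constants below are illustrative stand-ins for the paper's energy/QoS objective, not its actual formulation:

```python
import random

def fitness(placement, vm_load, host_cap):
    """Lower is better: hosts used, plus a heavy penalty for overload
    (a stand-in for the paper's energy and QoS objectives)."""
    used = {}
    for vm, host in enumerate(placement):
        used[host] = used.get(host, 0) + vm_load[vm]
    overload = sum(max(0, load - host_cap) for load in used.values())
    return len(used) + 10 * overload

def ga(population, vm_load, host_cap, n_hosts, generations=200, seed=0):
    """Plain elitist GA over placements (placement[i] = host of VM i)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, vm_load, host_cap))
        pop = pop[: max(2, len(pop) // 2)]          # selection (elitist)
        while len(pop) < 20:
            a, b = rng.sample(pop[:5], 2)           # crossover of elites
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # mutation
                child[rng.randrange(len(child))] = rng.randrange(n_hosts)
            pop.append(child)
    return min(pop, key=lambda p: fitness(p, vm_load, host_cap))

def dpga(vm_load, host_cap, n_hosts, n_islands=4):
    """Stage 1: independent GA islands; stage 2: GA seeded with the
    islands' best solutions as its initial population."""
    rng = random.Random(42)
    islands = []
    for i in range(n_islands):
        pop = [[rng.randrange(n_hosts) for _ in vm_load] for _ in range(20)]
        islands.append(ga(pop, vm_load, host_cap, n_hosts, seed=i))
    return ga(islands * 5, vm_load, host_cap, n_hosts, seed=99)
```

Because the stage-2 population starts from already-good placements, it refines rather than restarts the search, which is the claimed benefit of the two-stage design.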
The Heliophysics Data Environment Today
NASA Technical Reports Server (NTRS)
Fung, Shing F.; McGuire, R.; Roberts, D. A.
2008-01-01
Driven by the nature of the research questions now most critical to further progress in heliophysics science, data-driven research has evolved from a model once centered on individual instrument Principal Investigator groups and a circle of immediate collaborators into a more inclusive and open environment where data gathered at great public cost must then be findable and usable throughout the broad national and international research community. In this paper, and as an introduction to this special session, we will draw a picture of existing and evolving resources throughout the heliophysics community, the capabilities and data now available to end users, and the relationships and complementarity of different elements in the environment today. We will cite the relative roles of mission and instrument data centers and resident archives, multi-mission data centers, and the growing importance of virtual discipline observatories and cross-cutting services, including the evolution of a common data dictionary. We will briefly summarize our view of the most important challenges still faced by users and providers, and our vision of how the efforts today can evolve into an increasingly enabling data framework for the global research community to tap the widest range of existing missions and their data to address a full range of critical science questions, from the scale of microphysics to the heliospheric system as a whole.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only academically valuable but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch the deep-sea video recorded at any point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data stay synchronized with the traced track. Users can directly refer to additional information in other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, from each page on which a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, by synchronizing a virtual dive track with videos, the living organisms and geological environments at a dive point become easier to understand. These functions will therefore visually support understanding of deep-sea environments in lectures and educational activities.
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir
2013-01-01
Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for them, for uses such as social interaction, online education, and especially for familiarizing visually impaired users with a real environment virtually, from the comfort and safety of their own home, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the “EyeCane” electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be used later in real-world environments with stimuli identical to those from the virtual environment. We show the fast-learned practical use of this algorithm for navigation in simple environments. PMID:23977316
ERIC Educational Resources Information Center
Warburton, Steven
2009-01-01
"Second Life" (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by…
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
Enhancing Navigation Skills through Audio Gaming.
Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi
2010-01-01
We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks.
Enhancing Navigation Skills through Audio Gaming
Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi
2014-01-01
We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks. PMID:25505796
Improving User Notification on Frequently Changing HPC Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuson, Christopher B; Renaud, William A
2016-01-01
Today's HPC center user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasted compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.
The expert surgical assistant. An intelligent virtual environment with multimodal input.
Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C
1996-01-01
Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
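The third step, matching inferred user actions against an ordered procedure to monitor progress, can be sketched very simply. The snippet below is a toy illustration: the step names and the feedback strings are invented, not taken from the sinus surgery prototype.

```python
# Toy progress monitor: inferred actions are checked against an ordered
# procedure, and feedback is produced for each action. Step names are
# hypothetical examples, not steps from the paper's sinus surgery interface.
PROCEDURE = ["identify landmark", "insert endoscope", "remove tissue"]

def monitor(actions):
    """Return a feedback message for each inferred user action."""
    expected = 0
    feedback = []
    for act in actions:
        if expected < len(PROCEDURE) and act == PROCEDURE[expected]:
            expected += 1
            feedback.append(f"step {expected}/{len(PROCEDURE)} done: {act}")
        elif expected < len(PROCEDURE):
            feedback.append(f"warning: '{act}' out of order, expected "
                            f"'{PROCEDURE[expected]}'")
        else:
            feedback.append(f"warning: '{act}' after procedure complete")
    return feedback

log = monitor(["identify landmark", "remove tissue", "insert endoscope"])
```

In the full system described above, the actions would come from the rule-based expert system's interpretation of voice and gesture input rather than from a hand-written list.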
Scripting human animations in a virtual environment
NASA Technical Reports Server (NTRS)
Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.
1994-01-01
The current deficiencies of virtual environments (VEs) are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving high resolution animation displaying detailed models. In this system, the user is represented by a human computer model with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data is used to define key frames; motion is interpolated between them and post-processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
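The key-frame-and-interpolate step above can be sketched in a few lines. This is a minimal illustration under assumed names: poses are dicts of joint angles, and frames are filled in by linear interpolation between consecutive key frames (real systems would typically interpolate joint rotations with quaternions or splines).

```python
# Minimal sketch of the key-frame step: sparse tracked poses become a
# dense frame list for smooth playback. Joint names are assumptions.

def interpolate(key_a, key_b, t):
    """Blend two keyframe poses (dicts of joint -> angle) at t in [0, 1]."""
    return {j: key_a[j] + (key_b[j] - key_a[j]) * t for j in key_a}

def expand(keyframes, frames_between):
    """Linearly interpolate frames_between frames per keyframe interval."""
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(frames_between):
            frames.append(interpolate(a, b, i / frames_between))
    frames.append(keyframes[-1])   # end exactly on the final keyframe
    return frames

keys = [{"shoulder": 0.0, "elbow": 10.0},
        {"shoulder": 90.0, "elbow": 45.0}]
animation = expand(keys, 4)
```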
Manually locating physical and virtual reality objects.
Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G
2014-09-01
In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE are more accurate than predicted when locating farther objects.
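A geometric model of the kind described can be sketched in one dimension. The snippet below is an illustrative reconstruction, not the authors' actual model: a virtual point is drawn as two screen images (one per eye), and re-triangulating those images under a different eye geometry predicts a mis-localization. All distances (metres) and the chosen IPD mismatch are assumptions.

```python
# 1-D stereo geometry sketch: screen plane at z = screen, eyes at z = 0
# separated horizontally by the interpupillary distance (IPD).

def screen_images(x, depth, ipd, screen):
    """Horizontal screen positions of the left/right images of a point."""
    eyes = (-ipd / 2, ipd / 2)
    # Intersect each eye->point ray with the screen plane.
    return tuple(e + (x - e) * screen / depth for e in eyes)

def triangulate(img_l, img_r, ipd, screen):
    """Position perceived by intersecting the two eye->image rays."""
    disparity = img_r - img_l
    z = ipd * screen / (ipd - disparity)
    x = -ipd / 2 + (img_l + ipd / 2) * z / screen
    return x, z

# Rendered for an assumed 0.065 m IPD, but viewed with 0.060 m eyes:
il, ir = screen_images(x=0.1, depth=1.5, ipd=0.065, screen=2.0)
x_seen, z_seen = triangulate(il, ir, ipd=0.060, screen=2.0)
depth_error = z_seen - 1.5   # predicted depth mis-localization
```

When the viewing geometry matches the rendering geometry exactly, triangulation recovers the original point, which is a useful sanity check on the model.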
Individual Differences in a Spatial-Semantic Virtual Environment.
ERIC Educational Resources Information Center
Chen, Chaomei
2000-01-01
Presents two empirical case studies concerning the role of individual differences in searching through a spatial-semantic virtual environment. Discusses information visualization in information systems; cognitive factors, including associative memory, spatial ability, and visual memory; user satisfaction; and cognitive abilities and search…
Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.
Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker
2018-06-01
The objective was to create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined. Example environments are described and analysed. With the proposed software, a tool for the simulation of VAEs is available. A set of VAEs rendered with the proposed software was described.
Rocinante, a virtual collaborative visualizer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, M.J.; Ice, L.G.
1996-12-31
With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.
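The UDP state-synchronization idea can be sketched without any networking: each viewer applies fixed-layout state packets to its local geometric model. The wire format below (object id, position, quaternion orientation) is an invented example, not Rocinante's actual protocol.

```python
import struct

# Hypothetical state packet for a shared visualizer: one object update
# per datagram. The layout is an illustrative assumption.
STATE_FMT = "!I3f4f"   # network byte order: object id, xyz, quaternion

def pack_state(obj_id, pos, quat):
    """Serialize one object update for transmission in a UDP datagram."""
    return struct.pack(STATE_FMT, obj_id, *pos, *quat)

def unpack_state(payload):
    """Decode a received datagram back into (id, position, orientation)."""
    fields = struct.unpack(STATE_FMT, payload)
    return fields[0], fields[1:4], fields[4:8]

msg = pack_state(7, (1.0, 2.0, 0.5), (0.0, 0.0, 0.0, 1.0))
obj_id, pos, quat = unpack_state(msg)   # each viewer applies this locally
```

A fixed binary layout keeps packets small and decoding cheap, which matters when every viewer must keep up with a stream of object updates.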
Information Visualization in Virtual Environments
NASA Technical Reports Server (NTRS)
Bryson, Steve; Kwak, Dochan (Technical Monitor)
2001-01-01
Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as issues of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.
NASA Technical Reports Server (NTRS)
Ross, M. D.; Montgomery, K.; Linton, S.; Cheng, R.; Smith, J.
1998-01-01
This report describes the three-dimensional imaging and virtual environment technologies developed in NASA's Biocomputation Center for scientific purposes that have now led to applications in the field of medicine. A major goal is to develop a virtual environment surgery workbench for planning complex craniofacial and breast reconstructive surgery, and for training surgeons.
Building intuitive 3D interfaces for virtual reality systems
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh
2007-03-01
An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.
Constructing Virtual Training Demonstrations
2008-12-01
virtual environments have been shown to be effective for training, and distributed game-based architectures contribute an added benefit of wide...investigation of how a demonstration authoring toolset can be constructed from existing virtual training environments using 3-D multiplayer gaming...intelligent agents project to create AI middleware for simulations and videogames. The result was SimBionic®, which enables users to graphically author
Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.
Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E
2007-01-01
This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.
Tuning self-motion perception in virtual reality with visual illusions.
Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus
2012-07-01
Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
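The gain-based mapping that the authors contrast with their illusion approach is simple to sketch: the tracked real-world step is scaled by a translation gain before it moves the virtual camera. The gain value and step sizes below are illustrative, not figures from the paper.

```python
# Translation-gain sketch: real tracked movement is scaled before being
# applied to the virtual camera, so virtual travel exceeds real travel.

def apply_translation_gain(real_step, gain):
    """Scale a tracked (dx, dy, dz) movement before applying it virtually."""
    return tuple(gain * c for c in real_step)

camera = [0.0, 0.0, 0.0]
for step in [(0.0, 0.0, 0.7), (0.0, 0.0, 0.7)]:   # two 0.7 m real steps
    dx, dy, dz = apply_translation_gain(step, gain=1.4)
    camera = [c + d for c, d in zip(camera, (dx, dy, dz))]
# the camera has now advanced 1.96 m for 1.4 m of real walking
```

The discrepancy this introduces between real and virtual motion is exactly the drawback motivating the optic-flow illusions described above, which leave the motion mapping itself one-to-one.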
A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience
ERIC Educational Resources Information Center
Shih, Ya-Chun
2015-01-01
Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…
The SEE Experience: Edutainment in 3D Virtual Worlds.
ERIC Educational Resources Information Center
Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan
Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…
Educational MOO: Text-Based Virtual Reality for Learning in Community. ERIC Digest.
ERIC Educational Resources Information Center
Turbee, Lonnie
MOO stands for "Multi-user domain, Object-Oriented." Early multi-user domains, or "MUDs," began as net-based dungeons-and-dragons type games, but MOOs have evolved from these origins to become some of cyberspace's most fascinating and engaging online communities. MOOs are social environments in a text-based virtual reality…
Relating Narrative, Inquiry, and Inscriptions: Supporting Consequential Play
ERIC Educational Resources Information Center
Barab, Sasha A.; Sadler, Troy D.; Heiselt, Conan; Hickey, Daniel; Zuiker, Steven
2007-01-01
In this paper we describe our research using a multi-user virtual environment, "Quest Atlantis," to embed fourth grade students in an aquatic habitat simulation. Specifically targeted towards engaging students in a rich inquiry investigation, we layered a socio-scientific narrative and an interactive rule set into a multi-user virtual environment…
Libraries' Place in Virtual Social Networks
ERIC Educational Resources Information Center
Mathews, Brian S.
2007-01-01
Do libraries belong in the virtual world of social networking? With more than 100 million users, this environment is impossible to ignore. A rising philosophy for libraries, particularly in blog-land, involves the concept of being where the users are. Simply using new media to deliver an old message is not progress. Instead, librarians should…
ERIC Educational Resources Information Center
Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos
2014-01-01
Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…
The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning
ERIC Educational Resources Information Center
Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar
2017-01-01
Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…
ERIC Educational Resources Information Center
Ingerham, Laura
2012-01-01
Recent studies of online learning environments reveal the importance of interaction within the virtual environment. Abrami, Bernard, Bures, Borokhovski, and Tamim (2011) identify and study 3 types of student interactions: student-content, student-teacher, and student-student. This article builds on this classification of interactions as it…
DELIVERing Library Resources to the Virtual Learning Environment
ERIC Educational Resources Information Center
Secker, Jane
2005-01-01
Purpose: Examines a project to integrate digital libraries and virtual learning environments (VLE) focusing on requirements for online reading list systems. Design/methodology/approach: Conducted a user needs analysis using interviews and focus groups and evaluated three reading or resource list management systems. Findings: Provides a technical…
Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.
ERIC Educational Resources Information Center
Hriber, Dennis C.; And Others
1993-01-01
Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…
NASA Astrophysics Data System (ADS)
Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.
2014-02-01
Most advanced immersive devices provide a collaborative environment within which several users have their own distinct head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge user input from virtual reality devices in order to control immersive multi-user applications. We deal with this problem from a practical point of view, because the main requirements of this supervisor were defined according to an industrial task proposed by our automotive partner that has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers of a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and to manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspect of this supervisor, we describe how we ensure the modularity of our solution, so that it can be applied to different virtual reality platforms, interactive contexts, and virtual contents.
A virtual context observer included in this supervisor was especially designed to be independent of the content of the virtual scene of the targeted application, and is used to report high-level interactive and collaborative events. This context observer allows the supervisor to merge these interactive and collaborative events, but is also used to deal with new issues arising from our observation of two co-located users performing this assembly task in an immersive device. We highlight the fact that when speech recognition features are provided to the two users, it is necessary to detect automatically, according to the interactive context, whether vocal instructions must be translated into commands to be performed by the machine, or whether they are part of the natural communication necessary for collaboration. Information from this context observer indicating that a user is looking at his or her collaborator is important for detecting whether the user is talking to the partner. Moreover, as the users are physically co-located and head-tracking is used to provide high-fidelity stereoscopic rendering and natural walking navigation in the virtual scene, we have to deal with collisions and screen occlusion between the co-located users in the physical workspace. The working area and focus of each user, computed and reported by the context observer, are necessary to prevent or avoid these situations.
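The gaze-based disambiguation described above can be sketched with a simple cone test. This is an illustrative reconstruction, not the authors' implementation: the 15° threshold is an assumed value, and the gaze direction is assumed to be a unit vector from head tracking.

```python
import math

def looking_at(speaker_pos, gaze_dir, partner_pos, threshold_deg=15.0):
    """True if the partner lies within a cone around the speaker's gaze.

    gaze_dir is assumed to be a unit vector; threshold_deg is illustrative.
    """
    to_partner = [p - s for p, s in zip(partner_pos, speaker_pos)]
    dist = math.dist(partner_pos, speaker_pos)
    cos_angle = sum(d * t for d, t in zip(gaze_dir, to_partner)) / dist
    return cos_angle >= math.cos(math.radians(threshold_deg))

def route_utterance(utterance, speaker_pos, gaze_dir, partner_pos):
    """Treat speech as conversation when gazing at the partner, else command."""
    if looking_at(speaker_pos, gaze_dir, partner_pos):
        return ("conversation", utterance)   # user is talking to the partner
    return ("command", utterance)            # forward to multimodal fusion

# Speaker faces +x with the partner directly ahead -> conversation.
result = route_utterance("rotate the seat", (0, 0, 0), (1, 0, 0), (2, 0, 0))
```

In the supervisor described above, this decision would be one of several context-observer signals feeding the fusion step, alongside working-area and focus information.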
The NASA Goddard Space Flight Center Virtual Science Fair
NASA Technical Reports Server (NTRS)
Bolognese, Jeff; Walden, Harvey; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
This report describes the development of the NASA Goddard Space Flight Center Virtual Science Fair, including its history and outgrowth from the traditional regional science fairs supported by NASA. The results of the 1999 Virtual Science Fair pilot program, the mechanics of running the 2000 Virtual Science Fair and its results, and comments and suggestions for future Virtual Science Fairs are provided. The appendices to the report include the original proposal for this project, the judging criteria, the user's guide and the judge's guide to the Virtual Science Fair Web site, the Fair publicity brochure and the Fair award designs, judges' and students' responses to survey questions about the Virtual Science Fair, and lists of student entries to both the 1999 and 2000 Fairs.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into physical and virtualized segments is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
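The lifetime coupling described above (the VM exists exactly as long as the batch job) can be sketched minimally as follows; the boot/destroy functions are stand-ins, not the real OpenStack or Moab interfaces:

```python
import uuid

LOG = []  # records the order of lifecycle events

def boot_vm(image):
    """Stand-in for launching an OpenStack instance from a machine image."""
    vm_id = uuid.uuid4().hex[:8]
    LOG.append(("boot", image, vm_id))
    return vm_id

def destroy_vm(vm_id):
    """Stand-in for terminating the instance."""
    LOG.append(("destroy", vm_id))

def run_vm_job(image, job_body):
    """Couple the VM lifetime to the batch job lifetime: the job boots
    the VM, the payload runs within the job's walltime, and the VM is
    always torn down when the job ends, even on failure."""
    vm_id = boot_vm(image)
    try:
        job_body()
    finally:
        destroy_vm(vm_id)

run_vm_job("hep-sl-image", lambda: LOG.append(("work", "processing events")))
print([entry[0] for entry in LOG])  # ['boot', 'work', 'destroy']
```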
Yovcheva, Zornitza; van Elzakker, Corné P J M; Köbben, Barend
2013-11-01
Web-based tools developed in the last couple of years offer unique opportunities to effectively support scientists in their effort to collaborate. Communication among environmental researchers often involves not only work with geographical (spatial), but also with temporal data and information. Literature still provides limited documentation when it comes to user requirements for effective geo-collaborative work with spatio-temporal data. To start filling this gap, our study adopted a User-Centered Design approach and first explored the user requirements of environmental researchers working on distributed research projects for collaborative dissemination, exchange and work with spatio-temporal data. Our results show that system design will be mainly influenced by the nature and type of data users work with. From the end-users' perspective, optimal conversion of huge files of spatio-temporal data for further dissemination, accuracy of conversion, organization of content and security have a key role for effective geo-collaboration. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
The Virtual Environment for Rapid Prototyping of the Intelligent Environment
Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien
2017-01-01
Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175
ERIC Educational Resources Information Center
Yildirim, Gürkan
2017-01-01
Today, it is seen that developing technologies are tried to be used continuously in the learning environments. These technologies have rapidly been diversifying and changing. Recently, virtual reality technology has become one of the technologies that experts have often been dwelling on. The present research tries to determine users' opinions and…
Virtual environments simulation in research reactor
NASA Astrophysics Data System (ADS)
Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin
2017-01-01
Virtual-reality-based simulations are interactive and engaging, and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, physics for movement and collision, and interactive navigation features have been taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars, to restrain the avatars to certain regions of the virtual environment. A user can control an avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel, as in evaluating the radiological safety of the research reactor facility.
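A toy illustration of the region restraint mentioned above (not the simulator's actual collision code): the avatar's position is clamped to an axis-aligned bounding region of the virtual building.

```python
def clamp_to_region(pos, bounds):
    """Restrain an avatar to an axis-aligned 2D region.
    pos is (x, y); bounds is ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = bounds
    x, y = pos
    return (min(max(x, xmin), xmax), min(max(y, ymin), ymax))

# An avatar stepping outside the permitted reactor-hall region is
# pushed back to its boundary:
print(clamp_to_region((12.0, -3.0), ((0.0, 10.0), (0.0, 5.0))))  # (10.0, 0.0)
```

Real collision detection against arbitrary 3D geometry is considerably more involved; this only conveys the idea of confining movement.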
Claiming Unclaimed Spaces: Virtual Spaces for Learning
ERIC Educational Resources Information Center
Miller, Nicole C.
2016-01-01
The purpose of this study was to describe and examine the environments used by teacher candidates in multi-user virtual environments. Secondary data analysis of a case study methodology was employed. Multiple data sources including interviews, surveys, observations, snapshots, course artifacts, and the researcher's journal were used in the initial…
On Being Bored and Lost (in Virtuality)
ERIC Educational Resources Information Center
Moore, Kristen; Pflugfelder, Ehren Helmut
2010-01-01
Education in virtual worlds has the potential, it seems, for engaging students in innovative ways and for enabling new discourses on a host of issues. Virtual locations like "Second Life," "Kaneva," or "World of Warcraft," among other multi-user virtual environments (MUVEs), also come with unique challenges for educators as they consider the…
Teaching with Virtual Worlds: Factors to Consider for Instructional Use of Second Life
ERIC Educational Resources Information Center
Mayrath, Michael C.; Traphagan, Tomoko; Jarmon, Leslie; Trivedi, Avani; Resta, Paul
2010-01-01
Substantial evidence now supports pedagogical applications of virtual worlds; however, most research supporting virtual worlds for education has been conducted using researcher-developed Multi-User Virtual Environments (MUVE). Second Life (SL) is a MUVE that has been adopted by a large number of academic institutions; however, little research has…
CAVE2: a hybrid reality environment for immersive simulation and information analysis
NASA Astrophysics Data System (ADS)
Febretti, Alessandro; Nishimoto, Arthur; Thigpen, Terrance; Talandis, Jonas; Long, Lance; Pirtle, J. D.; Peterka, Tom; Verlo, Alan; Brown, Maxine; Plepys, Dana; Sandin, Dan; Renambot, Luc; Johnson, Andrew; Leigh, Jason
2013-03-01
Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2(TM) Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system, 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and VTK applications.
ERIC Educational Resources Information Center
Leonard, Elizabeth; Morasch, Maureen J.
2012-01-01
Despite the old-fashioned view of the academic library as a static institution, libraries can and do change in response to the needs of users and stakeholders. Perhaps the most dramatic shift in services has been the transition from a purely physical to a combination physical/virtual or even virtual-only environment. This article examines how…
Lewis, Gwyn N; Rosie, Juliet A
2012-01-01
To review quantitative and qualitative studies that have examined the users' response to virtual reality game-based interventions in people with movement disorders associated with chronic neurological conditions. We aimed to determine key themes that influenced users' enjoyment and engagement in the games and develop suggestions as to how future systems could best address their needs and expectations. There were a limited number of studies that evaluated user opinions. From those found, seven common themes emerged: technology limitations, user control and therapist assistance, the novel physical and cognitive challenge, feedback, social interaction, game purpose and expectations, and the virtual environments. Our key recommendations derived from the review were to avoid technology failure, maintain overt therapeutic principles within the games, encompass progression to promote continuing physical and cognitive challenge, and to provide feedback that is easily and readily associated with success. While there have been few studies that have evaluated the users' perspective of virtual rehabilitation games, our findings indicate that canvassing these experiences provides valuable information on the needs of the intended users. Incorporating our recommendations may enhance the efficacy of future systems to optimize the rehabilitation benefits of virtual reality games.
A Data Management System for International Space Station Simulation Tools
NASA Technical Reports Server (NTRS)
Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)
2002-01-01
Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments, which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects.
The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and simulation of human-machine systems in a desktop-sized work volume.
Immersive virtual reality for visualization of abdominal CT
NASA Astrophysics Data System (ADS)
Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.
2013-03-01
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
Latency and User Performance in Virtual Environments and Augmented Reality
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2009-01-01
System rendering latency has been recognized by senior researchers, such as Professor Frederick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, disturb user interaction with virtual objects, and to contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user performance impact provided by several experiments. These experiments will review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter not using a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several examples of the effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation.
The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics, and human-computer interaction.
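A simple extrapolative predictor of the kind mentioned above (no dynamic movement model) can be sketched as linear extrapolation from the two most recent tracker samples; the function below is illustrative only, not the filter studied in the talk:

```python
def predict(samples, latency):
    """Estimate the tracked value `latency` seconds ahead from the two
    most recent (time, value) samples, by linear extrapolation.
    Compensates for rendering latency at the cost of possible overshoot
    when the motion changes direction."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * latency

# Head position moving at 2 units/s; predict 50 ms ahead of t = 1.0 s:
print(predict([(0.9, 1.8), (1.0, 2.0)], 0.05))  # approximately 2.1
```

Tuning, in this setting, amounts to choosing how far ahead to extrapolate and how aggressively to smooth the velocity estimate.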
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require the supercomputers or multiprocessor clusters available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating such a virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
NASA Astrophysics Data System (ADS)
Iwai, Go
2015-12-01
We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers the users near-real time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach provides a way of minimising the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.
Establishing a virtual learning environment: a nursing experience.
Wood, Anya; McPhee, Carolyn
2011-11-01
The use of virtual worlds has exploded in popularity, but getting started may not be easy. In this article, the authors, members of the corporate nursing education team at University Health Network, outline their experience with incorporating virtual technology into their learning environment. Over a period of several months, a virtual hospital, including two nursing units, was created in Second Life®, allowing more than 500 nurses to role-play in a safe environment without the fear of making a mistake. This experience has provided valuable insight into the best ways to develop and learn in a virtual environment. The authors discuss the challenges of installing and building the Second Life® platform and provide guidelines for preparing users and suggestions for crafting educational activities. This article provides a starting point for organizations planning to incorporate virtual worlds into their learning environment. Copyright 2011, SLACK Incorporated.
Designing 3 Dimensional Virtual Reality Using Panoramic Image
NASA Astrophysics Data System (ADS)
Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna
The high demand for improving the quality of presentations in the knowledge-sharing field stems from the need to keep pace with rapidly growing technology. The need for technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE) for the benefit of our future. A panoramic virtual reality learning environment is essential to help educators overcome the limitations of traditional technical writing lessons. Virtual reality helps users understand better by providing simulations of real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, in order to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama shows the OGPVE and helps users form ideas about what they have learned. This paper discusses part of the development of the panoramic virtual reality. The important phases in developing a successful panoramic image are image acquisition and image stitching (mosaicing). The combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed.
The NASA Goddard Space Flight Center Virtual Science Fair
NASA Technical Reports Server (NTRS)
Bolognese, Jeff; Walden, Harvey; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
This report describes the development of the NASA Goddard Space Flight Center Virtual Science Fair, including its history and outgrowth from the traditional regional science fairs supported by NASA. The results of the 1999 Virtual Science Fair pilot program, the mechanics of running the 2000 Virtual Science Fair and its results, and comments and suggestions for future Virtual Science Fairs are provided. The appendices to the report contain supporting documentation, including the original proposal for this project, the judging criteria, the user's guide and the judge's guide to the Virtual Science Fair Web site, the Fair publicity brochure and the Fair award designs, judges' and students' responses to survey questions about the Virtual Science Fair, and lists of student entries to both the 1999 and 2000 Fairs.
Ketelhut, Diane Jass; Niemi, Steven M
2007-01-01
This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods to increase the number of students who are served by using current methods alone. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.
Using Virtual Reality For Outreach Purposes in Planetology
NASA Astrophysics Data System (ADS)
Civet, François; Le Mouélic, Stéphane; Le Menn, Erwan; Beaunay, Stéphanie
2016-10-01
2016 has been a year marked by a technological breakthrough: the availability, for the first time, of technologically mature virtual reality devices to the general public. Virtual reality consists in visually immersing a user in a 3D environment reproduced from real and/or imaginary data, with the possibility to move and eventually interact with its different elements. In planetology, most places will remain inaccessible to the public for a while, but a fleet of dedicated spacecraft such as orbiters, landers and rovers makes it possible to virtually reconstruct these environments using image processing, cartography and photogrammetry. Virtual reality can then bridge the gap, virtually "sending" any user into the scene to enjoy the exploration. We are investigating several types of devices to render orbital or ground-based data of planetological interest, mostly from Mars. The simplest system consists of a "cardboard" headset, in which the user's cellphone serves as the screen. A more comfortable experience is obtained with more complex systems such as the HTC Vive or Oculus Rift headsets, which include a tracking system important to minimize motion sickness. The third environment that we have developed is based on the CAVE concept, where four 3D video projectors project onto three 2x3 m walls plus the ground. These systems can be used for scientific data analysis, but also prove to be perfectly suited for outreach and education purposes.
Collaboration Modality, Cognitive Load, and Science Inquiry Learning in Virtual Inquiry Environments
ERIC Educational Resources Information Center
Erlandson, Benjamin E.; Nelson, Brian C.; Savenye, Wilhelmina C.
2010-01-01
Educational multi-user virtual environments (MUVEs) have been shown to be effective platforms for situated science inquiry curricula. While researchers find MUVEs to be supportive of collaborative scientific inquiry processes, the complex mix of multi-modal messages present in MUVEs can lead to cognitive overload, with learners unable to…
Predicting Virtual Learning Environment Adoption: A Case Study
ERIC Educational Resources Information Center
Penjor, Sonam; Zander, Pär-Ola
2016-01-01
This study investigates the significance of Rogers' Diffusion of Innovations (DOI) theory with regard to the use of a Virtual Learning Environment (VLE) at the Royal University of Bhutan (RUB). The focus is on different adoption types and characteristics of users. Rogers' DOI theory is applied to investigate the influence of five predictors…
Evaluating Technology-Based Educational Interventions: A Review of Two Projects
ERIC Educational Resources Information Center
Adamo-Villani, Nicoletta; Dib, Hazar
2013-01-01
The article discusses current evaluation methodologies used to assess the usability, user enjoyment, and pedagogical efficacy of virtual learning environments (VLEs) and serious games. It also describes the evaluations of two recently developed projects: a virtual learning environment that employs a fantasy 3D world to engage deaf and hearing…
NASA Astrophysics Data System (ADS)
Demir, I.
2015-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts; it provides a visually striking platform with realistic terrain, weather information, and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.
NASA Technical Reports Server (NTRS)
Ishii, Masahiro; Sukanya, P.; Sato, Makoto
1994-01-01
This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually act in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel some physical properties such as contact, weight, etc. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face-and-fit task. We then selected the conditions under which the subjects felt most comfortable in performing this task and set up our virtual work space accordingly. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and then letting the subjects create new models by assembling these components. The subjects could naturally perform assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for performing tasks that require two-handed manipulation or cooperation between both hands in a natural manner.
A collaborative molecular modeling environment using a virtual tunneling service.
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments.
Virtual reality environments for post-stroke arm rehabilitation.
Subramanian, Sandeep; Knaut, Luiz A; Beaudoin, Christian; McFadyen, Bradford J; Feldman, Anatol G; Levin, Mindy F
2007-06-22
Optimal practice and feedback elements are essential requirements for maximal motor recovery in patients with motor deficits due to central nervous system lesions. A virtual environment (VE) was created that incorporates the practice and feedback elements necessary for maximal motor recovery. It permits varied and challenging practice in a motivating environment that provides salient feedback. The VE gives the user knowledge-of-results feedback about motor behavior and knowledge-of-performance feedback about the quality of pointing movements made in a virtual elevator. Movement distances are related to the lengths of the user's body segments. We describe an immersive and interactive experimental protocol developed in a virtual reality environment using the CAREN system. The VE can be used as a training environment for the upper limb in patients with motor impairments.
Developing a Virtual Rock Deformation Laboratory
NASA Astrophysics Data System (ADS)
Zhu, W.; Ougier-simonin, A.; Lisabeth, H. P.; Banker, J. S.
2012-12-01
Experimental rock physics plays an important role in advancing earthquake research. Despite its importance in geophysics, reservoir engineering, waste deposits and energy resources, most geology departments in U.S. universities do not have rock deformation facilities. A virtual deformation laboratory can serve as an efficient tool to help geology students nationally and internationally learn about rock deformation. Working with computer science engineers, we built a virtual deformation laboratory that aims to foster user interaction to facilitate classroom and outreach teaching and learning. The virtual lab is built around a triaxial deformation apparatus in which laboratory measurements of mechanical and transport properties such as stress, axial and radial strains, acoustic emission activities, wave velocities, and permeability are demonstrated. A student user can create an avatar to enter the virtual lab. In the virtual lab, the avatar can browse and choose among various rock samples, determine the testing conditions (pressure, temperature, strain rate, loading paths), then operate the virtual deformation machine to observe how deformation changes the physical properties of rocks. Actual experimental results on the mechanical, frictional, sonic, acoustic and transport properties of different rocks at different conditions are compiled, and the data acquisition system in the virtual lab is linked to the compiled experimental data. Structural and microstructural images of deformed rocks are uploaded and linked to the different deformation tests. The integration of the microstructural images and the deformation data allows the student to visualize how forces reshape the structure of the rock and change its physical properties. The virtual lab is built using a game engine. The geological background, outstanding questions related to the geological environment, and physical and mechanical concepts associated with the problem will be illustrated on the web portal.
In addition, web-based data collection tools are available to collect student feedback and opinions on their learning experience. The virtual laboratory is designed to be an online education tool that facilitates interactive learning.
Evaluation of the Next-Gen Exercise Software Interface in the NEEMO Analog
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Kalogera, Kent; Sandor, Aniko; Hardy, Marc; Frank, Andrew; English, Kirk; Williams, Thomas; Perera, Jeevan; Amonette, William
2017-01-01
NSBRI (National Space Biomedical Research Institute) funded research grant to develop the 'NextGen' exercise software for the NEEMO (NASA Extreme Environment Mission Operations) analog. Develop a software architecture to integrate instructional, motivational and socialization techniques into a common portal to enhance exercise countermeasures in remote environments. Increase user efficiency and satisfaction, and institute commonality across multiple exercise systems. Utilized GUI (Graphical User Interface) design principles focused on intuitive ease of use to minimize training time and realize early user efficiency. Project requirement to test the software in an analog environment. Top-Level Project Aims: 1) Improve the usability of crew interface software to exercise CMS (Crew Management System) through common app-like interfaces. 2) Introduce virtual instructional motion training. 3) Use a virtual environment to provide remote socialization with family and friends, and improve exercise technique, adherence, motivation and ultimately performance outcomes.
E-Drama: Facilitating Online Role-Play Using an AI Actor and Emotionally Expressive Characters
ERIC Educational Resources Information Center
Zhang, Li; Gillies, Marco; Dhaliwal, Kulwant; Gower, Amanda; Robertson, Dale; Crabtree, Barry
2009-01-01
This paper describes a multi-user role-playing environment, referred to as "e-drama", which enables groups of people to converse online, in scenario driven virtual environments. The starting point of this research, is an existing application known as "edrama", a 2D graphical environment in which users are represented by static…
Cranial implant design using augmented reality immersive system.
Ai, Zhuming; Evenhouse, Ray; Leigh, Jason; Charbel, Fady; Rasmussen, Mary
2007-01-01
Software tools that utilize haptics for sculpting precise fitting cranial implants are utilized in an augmented reality immersive system to create a virtual working environment for the modelers. The virtual environment is designed to mimic the traditional working environment as closely as possible, providing more functionality for the users. The implant design process uses patient CT data of a defective area. This volumetric data is displayed in an implant modeling tele-immersive augmented reality system where the modeler can build a patient specific implant that precisely fits the defect. To mimic the traditional sculpting workspace, the implant modeling augmented reality system includes stereo vision, viewer centered perspective, sense of touch, and collaboration. To achieve optimized performance, this system includes a dual-processor PC, fast volume rendering with three-dimensional texture mapping, the fast haptic rendering algorithm, and a multi-threading architecture. The system replaces the expensive and time consuming traditional sculpting steps such as physical sculpting, mold making, and defect stereolithography. This augmented reality system is part of a comprehensive tele-immersive system that includes a conference-room-sized system for tele-immersive small group consultation and an inexpensive, easily deployable networked desktop virtual reality system for surgical consultation, evaluation and collaboration. This system has been used to design patient-specific cranial implants with precise fit.
Using Immersive Virtual Reality for Electrical Substation Training
ERIC Educational Resources Information Center
Tanaka, Eduardo H.; Paludo, Juliana A.; Cordeiro, Carlúcio S.; Domingues, Leonardo R.; Gadbem, Edgar V.; Euflausino, Adriana
2015-01-01
Usually, distribution electricians are called upon to solve technical problems found in electrical substations. In this project, we apply problem-based learning to a training program for electricians, with the help of a virtual reality environment that simulates a real substation. Using this virtual substation, users may safely practice maneuvers…
The Optokinetic Cervical Reflex (OKCR) in Pilots of High-Performance Aircraft.
1997-04-01
Coupled System virtual reality - the attempt to create a realistic, three-dimensional environment or synthetic immersive environment in which the user ...factors interface between the pilot and the flight environment. The final section is a case study of head- and helmet-mounted displays (HMD) and the impact...themselves as actually moving (flying) through a virtual environment. However, in the studies of Held, et al. (1975) and Young, et al. (1975) the
What makes virtual agents believable?
NASA Astrophysics Data System (ADS)
Bogdanovych, Anton; Trescak, Tomas; Simoff, Simeon
2016-01-01
In this paper we investigate the concept of believability and make an attempt to isolate individual characteristics (features) that contribute to making virtual characters believable. As a result of this investigation we have produced a formalisation of believability and, based on this formalisation, built a computational framework focused on the simulation of believable virtual agents that possess the identified features. In order to test whether the identified features are, in fact, responsible for agents being perceived as more believable, we conducted a user study. In this study we tested user reactions towards virtual characters created for a simulation of the aboriginal inhabitants of a particular area of Sydney, Australia, in 1770 A.D. The participants of our user study were exposed to short simulated scenes, in which virtual agents performed some behaviour in two different ways (possessing a certain aspect of believability vs. not possessing it). The results of the study indicate that virtual agents that appear resource-bounded; are aware of their environment, their own interaction capabilities, and their state in the world; can adapt to changes in the environment; and exist in a correct social context are those perceived as more believable. Further in the paper we discuss these and other believability features and provide a quantitative analysis of the level of contribution of each such feature to the overall perceived believability of a virtual agent.
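The abstract does not reproduce the paper's actual formalisation; as a purely illustrative sketch, believability features of the kind listed (resource boundedness, environment awareness, adaptivity, social context) could be aggregated into a single score. All feature names and weights below are hypothetical:

```python
# Illustrative only: a weighted-sum aggregation of believability features.
# The feature set and equal weights are assumptions, not the paper's model.

FEATURES = {
    "resource_bounded": 0.25,   # agent appears to have limited resources
    "environment_aware": 0.25,  # agent reacts to its surroundings
    "adaptive": 0.25,           # agent adapts to environmental changes
    "social_context": 0.25,     # agent behaves within a correct social context
}

def believability(agent_scores):
    """Aggregate per-feature scores in [0, 1] into one believability value."""
    return sum(FEATURES[f] * agent_scores.get(f, 0.0) for f in FEATURES)

agent = {"resource_bounded": 1.0, "environment_aware": 1.0,
         "adaptive": 0.5, "social_context": 1.0}
score = believability(agent)   # 0.875
```

A per-feature contribution analysis, as the paper reports, would then compare such scores with the feature toggled on versus off.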
Antoniou, Panagiotis E; Athanasopoulou, Christina A; Dafli, Eleni
2014-01-01
Background: Since their inception, virtual patients have provided health care educators with a way to engage learners in an experience simulating the clinician’s environment without danger to learners and patients. This has led this learning modality to be accepted as an essential component of medical education. With the advent of the visually and audio-rich 3-dimensional multi-user virtual environment (MUVE), a new deployment platform has emerged for educational content. Immersive, highly interactive, multimedia-rich MUVEs that seamlessly foster collaboration provide a new hotbed for the deployment of medical education content. Objective: This work aims to assess the suitability of the Second Life MUVE as a virtual patient deployment platform for undergraduate dental education, and to explore the requirements and specifications needed to meaningfully repurpose Web-based virtual patients in MUVEs. Methods: Through the scripting capabilities and available art assets in Second Life, we repurposed an existing Web-based periodontology virtual patient into Second Life. Through a series of point-and-click interactions and multiple-choice queries, the user experienced a specific periodontology case and was asked to provide the optimal responses for each of the challenges of the case. A focus group of 9 undergraduate dentistry students experienced both the Web-based and the Second Life version of this virtual patient. The group convened 3 times and discussed relevant issues such as the group’s computer literacy, the assessment of Second Life as a virtual patient deployment platform, and compared the Web-based and MUVE-deployed virtual patients. Results: A comparison between the Web-based and the Second Life virtual patient revealed the inherent advantages of the more experiential and immersive Second Life virtual environment. However, several challenges for the successful repurposing of virtual patients from the Web to the MUVE were identified.
The identified challenges for repurposing of Web virtual patients to the MUVE platform from the focus group study were (1) increased case complexity to facilitate the user’s gaming preconception in a MUVE, (2) necessity to decrease textual narration and provide the pertinent information in a more immersive sensory way, and (3) requirement to allow the user to actuate the solutions of problems instead of describing them through narration. Conclusions For a successful systematic repurposing effort of virtual patients to MUVEs such as Second Life, the best practices of experiential and immersive game design should be organically incorporated in the repurposing workflow (automated or not). These findings are pivotal in an era in which open educational content is transferred to and shared among users, learners, and educators of various open repositories/environments. PMID:24927470
Collaborative voxel-based surgical virtual environments.
Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan
2008-01-01
Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
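The smaller-update idea behind this C-VE method can be sketched as follows (a minimal illustration of sparse voxel deltas, not the simulator's actual protocol; all names are hypothetical):

```python
# Sketch: delta synchronization for a voxel-based collaborative VE.
# Rather than broadcasting the whole volume after a drilling operation,
# each client sends only the (index, new_value) pairs that changed.

def voxel_delta(before, after):
    """Return the sparse set of changed voxels between two volume snapshots."""
    return {i: v for i, (u, v) in enumerate(zip(before, after)) if u != v}

def apply_delta(volume, delta):
    """Apply a sparse update to a local replica of the volume."""
    vol = list(volume)
    for i, v in delta.items():
        vol[i] = v
    return vol

# A tiny 1-D "volume": drilling removes material (sets density to 0).
host = [9, 9, 9, 9, 9]
drilled = [9, 0, 0, 9, 9]

delta = voxel_delta(host, drilled)   # only 2 voxels changed
replica = apply_delta(host, delta)   # remote copy catches up
```

The update size scales with the voxels touched by the tool rather than with the full volume, which is why such a scheme can sustain faster synchronization rates than shipping volumetric snapshots.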
Review of Enabling Technologies to Facilitate Secure Compute Customization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM.
This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provides the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance. In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. 
In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the Native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time & MFlops when running in hypervisor-based environments (VMs) as compared to near native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker based tests achieved >= ~9Gbits/sec, while the KVM configuration only achieved 2.5Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
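The UID/GID mapping used in the report's non-root LXC tests follows the entry format of Linux user namespaces, where each `/proc/<pid>/uid_map` line is a triple of inside-start, outside-start, and length (see user_namespaces(7)). A minimal sketch of how such a mapping translates container UIDs to host UIDs:

```python
# Sketch: translate a UID seen inside a user namespace to its host (outside)
# UID, following the /proc/<pid>/uid_map entry format:
#   inside-start  outside-start  length

def map_uid(inside_uid, uid_map):
    """Return the host UID for inside_uid, or None if the UID is unmapped."""
    for inside_start, outside_start, length in uid_map:
        if inside_start <= inside_uid < inside_start + length:
            return outside_start + (inside_uid - inside_start)
    return None

# A common rootless-container mapping: container root (UID 0) maps to an
# unprivileged host user (e.g., 100000) with 65536 subordinate IDs.
uid_map = [(0, 100000, 65536)]
host_uid = map_uid(0, uid_map)   # container "root" is host UID 100000
```

This is exactly the isolation property the report highlights: a process that is privileged inside the VE holds only an unprivileged UID on the host.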
Guided exploration in virtual environments
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas
2001-06-01
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for the automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model, as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field-based camera-data-generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced, describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
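A potential-field camera system of the kind described can be sketched in a few lines: objects of interest exert attractive forces on the camera and obstacles exert repulsive ones, and the camera follows the resulting gradient. This is a generic 2-D illustration under assumed gains, not the CyberStage implementation:

```python
# Sketch: potential-field navigation support. Objects of interest attract
# the camera; obstacles repel it with a force falling off with distance.
import math

def step(cam, targets, obstacles, gain=0.2, repel=0.5):
    """Move the camera one step along the combined force field (2-D)."""
    fx = fy = 0.0
    for tx, ty in targets:                      # attractive forces
        fx += tx - cam[0]
        fy += ty - cam[1]
    for ox, oy in obstacles:                    # repulsive forces
        dx, dy = cam[0] - ox, cam[1] - oy
        d2 = dx * dx + dy * dy + 1e-9           # avoid division by zero
        fx += repel * dx / d2
        fy += repel * dy / d2
    return (cam[0] + gain * fx, cam[1] + gain * fy)

cam = (0.0, 0.0)
for _ in range(50):
    cam = step(cam, targets=[(4.0, 3.0)], obstacles=[(2.0, 0.5)])
# the camera drifts toward the exhibit while skirting the obstacle
```

Because the field is re-evaluated every step, such a system naturally accommodates the rapidly changing interests and dynamic scenes the paper emphasizes: moving a target or obstacle simply changes the next force evaluation.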
Import and visualization of clinical medical imagery into multiuser VR environments
NASA Astrophysics Data System (ADS)
Mehrle, Andreas H.; Freysinger, Wolfgang; Kikinis, Ron; Gunkel, Andreas; Kral, Florian
2005-03-01
The graphical representation of three-dimensional data obtained from tomographic imaging has been the central problem since this technology became available. Neither the representation as a set of two-dimensional slices nor the 2D projection of three-dimensional models yields satisfactory results. In this paper a way is outlined which permits the investigation of volumetric clinical data obtained from standard CT, MR, PET, SPECT or experimental very high resolution CT scanners in a three-dimensional environment within a few work steps. Volumetric datasets are converted into surface data (segmentation process) using the 3D Slicer software tool, saved as .vtk files, and exported as a collection of primitives in any common file format (.iv, .pfb). Subsequently, these files can be displayed and manipulated in the CAVE virtual reality center. The CAVE is a walkable multiuser virtual room consisting of several walls on which stereoscopic images are projected by rear panel beamers. Adequate tracking of the head position and separate image calculation for each eye yield a vivid impression for one or several users. With the use of a separately tracked 6D joystick, manipulations such as rotation, translation, zooming, decomposition or highlighting can be done intuitively. The use of CAVE technology opens new possibilities, especially in surgical training ("hands-on effect") and as an educational tool (availability of pathological data). Unlike competing technologies, the CAVE permits a walk-through into the virtual scene but preserves enough physical perception to allow interaction between multiple users, e.g. gestures and movements. By training in a virtual environment, on the one hand the learning process of students in complex anatomic findings may be improved considerably, and on the other hand unaccustomed views, such as the one through a microscope or endoscope, can be trained in advance.
The availability of low-cost PC-based CAVE-like systems and the rapidly decreasing price of high-performance video beamers make the CAVE an affordable alternative to conventional surgical training techniques, without the limitations of handling cadavers.
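The surface-export step in the pipeline above targets `.vtk` files. As a toy illustration of that intermediate format, the following writes a single triangle in the legacy ASCII VTK POLYDATA layout; this is a minimal hand-rolled sketch, not 3D Slicer's actual exporter:

```python
# Sketch: write a triangle mesh as a legacy ASCII .vtk POLYDATA file,
# the surface format produced by the segmentation step described above.

def write_vtk_polydata(path, points, triangles):
    """Write points and triangles in the legacy VTK ASCII polydata layout."""
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("toy surface\nASCII\nDATASET POLYDATA\n")
        f.write(f"POINTS {len(points)} float\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
        # each polygon record: vertex count followed by vertex indices
        f.write(f"POLYGONS {len(triangles)} {4 * len(triangles)}\n")
        for a, b, c in triangles:
            f.write(f"3 {a} {b} {c}\n")

write_vtk_polydata("triangle.vtk",
                   points=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                   triangles=[(0, 1, 2)])
```

A real segmentation would produce thousands of such triangles from an isosurface; the file structure, however, stays exactly this simple, which is what makes the format convenient as an interchange step before conversion to .iv or .pfb.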
The Creation of a Theoretical Framework for Avatar Creation and Revision
ERIC Educational Resources Information Center
Beck, Dennis; Murphy, Cheryl
2014-01-01
Multi-User Virtual Environments (MUVE) are increasingly being used in education and provide environments where users can manipulate minute details of their avatar's appearance including those traditionally associated with gender and race identification. The ability to choose racial and gender characteristics differs from real-world educational…
Chuang, Shih-Chyueh; Hwang, Fu-Kwun; Tsai, Chin-Chung
2008-04-01
The purpose of this study was to investigate the perceptions of Internet users of a physics virtual laboratory, Demolab, in Taiwan. Learners' perceptions of Internet-based learning environments were explored and the role of gender was examined by using preferred and actual forms of a revised Constructivist Internet-based Learning Environment Survey (CILES). The students expressed a clear gap between ideal and reality, and they showed higher preferences for many features of constructivist Internet-based learning environments than for features they had actually learned in Demolab. The results further suggested that male users prefer to be involved in the process of discussion and to show critical judgments. In addition, male users indicated they enjoyed the process of negotiation and discussion with others and were able to engage in reflective thoughts while learning in Demolab. In light of these findings, male users seemed to demonstrate better adaptability to the constructivist Internet-based learning approach than female users did. Although this study indicated certain differences between males and females in their responses to Internet-based learning environments, they also shared numerous similarities. A well-established constructivist Internet-based learning environment may encourage more female learners to participate in the science community.
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
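The chroma-key step mentioned above can be sketched in a few lines: pixels close to the key colour are treated as background and the rest as a foreground object. This is a toy RGB-distance mask, not the paper's mosaic-based tracker; the threshold and key colour are illustrative:

```python
# Sketch: chroma-key foreground extraction. Pixels whose colour is far from
# the key colour are kept as foreground -- a stand-in for the object
# extraction step that precedes compositing into the virtual conference.

def chroma_key_mask(image, key=(0, 255, 0), threshold=100):
    """Return a boolean mask: True where the pixel counts as foreground."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [[dist2(px, key) > threshold ** 2 for px in row] for row in image]

frame = [
    [(0, 255, 0), (200, 50, 40)],    # green background, reddish object
    [(10, 250, 5), (0, 255, 0)],
]
mask = chroma_key_mask(frame)
# mask -> [[False, True], [False, False]]
```

Once masked, the foreground object can be scaled, repositioned, or rotated independently before being composited into the shared 3D scene, which is the manipulation the system exposes to conference users.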
NASA Astrophysics Data System (ADS)
Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.
2018-01-01
The Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that presents the whole laboratory learning process. Users need many tools and materials for practical work, so the VR Lab lets them experience a new learning atmosphere. Because today's technologies are more sophisticated than before, applying them in education can make learning more effective and efficient. The supporting technologies needed to build a VR Lab are a head-mounted display device and a hand-motion gesture device, and their integration forms the basis of this research: the head-mounted display presents the 3D environment of the virtual reality laboratory, while the gesture device captures the user's real hands so they can be visualized in the virtual laboratory. The VR Lab shows that using the newest technologies in the learning process can make it more interesting and easier to understand.
ERIC Educational Resources Information Center
Annetta, Leonard; Murray, Marshall; Gull Laird, Shelby; Bohr, Stephanie; Park, John
2008-01-01
This article describes a graduate distance education course at North Carolina State University, which combined science content and pedagogy with video game design. The course was conducted entirely in a synchronous, online, Virtual Learning Environment (VLE) through the ActiveWorlds[TM] platform. Inservice teachers enrolled as graduate students in…
ERIC Educational Resources Information Center
Zuiker, Steven J.
2012-01-01
As a global cyberinfrastructure, the Internet makes authentic digital problem spaces like educational virtual environments (EVEs) available to a wide range of classrooms, schools and education systems operating under different circumstantial, practical, social and cultural conditions. And yet, if the makers and users of EVEs both have a hand in…
Designing for Real-World Scientific Inquiry in Virtual Environments
ERIC Educational Resources Information Center
Ketelhut, Diane Jass; Nelson, Brian C.
2010-01-01
Background: Most policy doctrines promote the use of scientific inquiry in the K-12 classroom, but good inquiry is hard to implement, particularly for schools with fiscal and safety constraints and for teachers struggling with understanding how to do so. Purpose: In this paper, we present the design of a multi-user virtual environment (MUVE)…
Avatars Go to Class: A Virtual Environment Soil Science Activity
ERIC Educational Resources Information Center
Mamo, M.; Namuth-Covert, D.; Guru, A.; Nugent, G.; Phillips, L.; Sandall, L.; Kettler, T.; McCallister, D.
2011-01-01
Web 2.0 technology is expanding rapidly from social and gaming uses into the educational applications. Specifically, the multi-user virtual environment (MUVE), such as SecondLife, allows educators to fill the gap of first-hand experience by creating simulated realistic evolving problems/games. In a pilot study, a team of educators at the…
A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills
NASA Astrophysics Data System (ADS)
Choi, Kup-Sze
This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, with a focus on the skills required for making a cross-shape trench in the cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools. A haptic device is also used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituting tetrahedrons of the lens model. Collisions between the virtual probe and the lens are effectively identified by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
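Hierarchical space partitioning with an octree, as used above for probe-lens collision detection, can be sketched as follows. This is a minimal point octree, not the simulator's code: the capacity and depth limits are assumed values, and the lens tetrahedrons are stood in for by representative points (e.g., centroids), so a collision query only tests candidates in the leaf containing the probe tip instead of every tetrahedron.

```python
class Octree:
    """Minimal point octree sketch (illustrative, not the paper's code)."""

    def __init__(self, center, half, depth=0, max_depth=4, capacity=4):
        self.center, self.half = center, half
        self.depth, self.max_depth, self.capacity = depth, max_depth, capacity
        self.points = []
        self.children = None  # eight sub-nodes once this leaf splits

    def _octant(self, p):
        # Child index 0..7 from the sign of p relative to the node center.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is not None:
            self.children[self._octant(p)].insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.capacity and self.depth < self.max_depth:
            self._split()

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)),
                   h, self.depth + 1, self.max_depth, self.capacity)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for p in pts:  # redistribute stored points into the new children
            self.children[self._octant(p)].insert(p)

    def candidates(self, p):
        """Points sharing p's leaf: the narrow-phase collision test set."""
        if self.children is None:
            return self.points
        return self.children[self._octant(p)].candidates(p)

# Proxy points: four in the +x+y+z octant (forces a split) and one opposite.
tree = Octree(center=(0.0, 0.0, 0.0), half=1.0)
for p in [(0.5, 0.5, 0.5), (0.6, 0.6, 0.6), (0.7, 0.7, 0.7),
          (0.8, 0.8, 0.8), (-0.5, -0.5, -0.5)]:
    tree.insert(p)
near = tree.candidates((0.9, 0.9, 0.9))  # only the four nearby points
```

The design choice is the usual broad-phase/narrow-phase split: the octree quickly discards distant geometry, and exact probe-tetrahedron intersection tests are run only on the surviving candidates.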
A Coalition Approach to Higher-Level Fusion
2009-07-01
environment and as a simulation which operates without any human interaction. In a typical wargame, WISE might portray a complex warfighting...the environment, while higher-level fusion is about establishing the story behind the data. Thus, achieving situational awareness for the users of...It is important to manage the relationship between the users and the virtual adviser so that context can be conveyed without disrupting the users
WebVR: an interactive web browser for virtual environments
NASA Astrophysics Data System (ADS)
Barsoum, Emad; Kuester, Falko
2005-03-01
The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D web browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
Design of an immersive simulator for assisted power wheelchair driving.
Devigne, Louise; Babel, Marie; Nouviale, Florian; Narayanan, Vishnu K; Pasteau, Francois; Gallien, Philippe
2017-07-01
Driving a power wheelchair is a difficult and complex visual-cognitive task. As a result, some people with visual and/or cognitive disabilities cannot access the benefits of a power wheelchair because their impairments prevent them from driving safely. In order to improve their access to mobility, we have previously designed a semi-autonomous assistive wheelchair system which progressively corrects the trajectory as the user manually drives the wheelchair and smoothly avoids obstacles. Developing and testing such systems for wheelchair driving assistance requires a significant amount of material resources and clinician time. With Virtual Reality technology, prototypes can be developed and tested in a risk-free and highly flexible Virtual Environment before equipping and testing a physical prototype. Additionally, users can "virtually" test and train more easily during the development process. In this paper, we introduce a power wheelchair driving simulator allowing the user to navigate with a standard wheelchair in an immersive 3D Virtual Environment. The simulation framework is designed to be flexible so that we can use different control inputs. In order to validate the framework, we first performed tests on the simulator with able-bodied participants during which the user's Quality of Experience (QoE) was assessed through a set of questionnaires. Results show that the simulator is a promising tool for future works as it generates a good sense of presence and requires rather low cognitive effort from users.
A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments. PMID:22927721
An integrated pipeline to create and experience compelling scenarios in virtual reality
NASA Astrophysics Data System (ADS)
Springer, Jan P.; Neumann, Carsten; Reiners, Dirk; Cruz-Neira, Carolina
2011-03-01
One of the main barriers to creating and using compelling scenarios in virtual reality is the complexity and time-consuming effort of modeling, element integration, and the software development required to properly display and interact with the content in the available systems. Still today, most virtual reality applications are tedious to create, and they are hard-wired to the specific display and interaction system available to the developers when creating the application. Furthermore, it is not possible to alter the content or the dynamics of the content once the application has been created. We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios, ranging from large cities to sparsely populated rural areas, that incorporate both static components (e.g., houses, trees) and dynamic components (e.g., people, vehicles) as well as events, such as explosions or ambient noise. Our pipeline has four basic components. First, an environment designer, where users sketch the overall layout of the scenario and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor used for authoring the complete scenario: incorporating the dynamic elements and events, fine-tuning the automatically generated environment, defining the execution conditions of the scenario, and setting up any data gathering that may be necessary during the execution of the scenario. Third, a run-time environment for different virtual-reality systems that provides users with the interactive experience as designed with the designer and the editor. And fourth, a bi-directional monitoring system that allows for capturing and modification of information from the virtual environment.
One of the interesting capabilities of our pipeline is that scenarios can be built and modified on-the-fly as they are being presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor on a control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality display. In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main use of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more general context.
NASA Technical Reports Server (NTRS)
Fisher, Scott S.
1986-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the Internet, supplying application resources and data to users on demand. The basis of cloud computing is the consumer-provider model: the cloud provider offers resources, which consumers access in order to build applications according to their needs. A cloud data center is a bulk of resources on a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On the one hand there is a huge number of resources, and on the other hand a huge number of requests must be served effectively. Therefore, resource allocation and scheduling policies play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy based on the Hungarian algorithm, which provides dynamic load balancing together with a monitor component. The monitor component helps to increase cloud resource utilization by supervising the Hungarian algorithm, monitoring its state and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
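The assignment problem that the Hungarian algorithm solves can be illustrated with a tiny dependency-free sketch. Here a brute-force search over permutations finds the same optimal request-to-host matching that the Hungarian algorithm would compute in O(n³); the cost matrix is a made-up example, not data from the paper.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Optimal one-to-one request-to-host assignment (illustrative).

    cost[i][j] is a hypothetical load cost of placing request i on host j.
    Exhaustive search over permutations is used only to keep the sketch
    dependency-free; the Hungarian algorithm reaches the same optimum
    in polynomial time, which is what makes it usable at scale.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Three requests, three hosts: the optimal matching spreads the load.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = min_cost_assignment(cost)  # [1, 0, 2], total cost 5
```

In a load balancer of the kind the abstract proposes, the cost matrix would be refreshed by the monitor component from observed VM and host utilization before each assignment round.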
Collaborative Virtual Gaming Worlds in Higher Education
ERIC Educational Resources Information Center
Whitton, Nicola; Hollins, Paul
2008-01-01
There is growing interest in the use of virtual gaming worlds in education, supported by the increased use of multi-user virtual environments (MUVEs) and massively multi-player online role-playing games (MMORPGs) for collaborative learning. However, this paper argues that collaborative gaming worlds have been in use much longer and are much wider…
An Investigation into Cooperative Learning in a Virtual World Using Problem-Based Learning
ERIC Educational Resources Information Center
Parson, Vanessa; Bignell, Simon
2017-01-01
Three-dimensional multi-user virtual environments (MUVEs) have the potential to provide experiential learning qualitatively similar to that found in the real world. MUVEs offer a pedagogically-driven immersive learning opportunity for educationalists that is cost-effective and enjoyable. A family of digital virtual avatars was created within…
Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study
ERIC Educational Resources Information Center
Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat
2013-01-01
Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…
Robots, multi-user virtual environments and healthcare: synergies for future directions.
Moon, Ajung; Grajales, Francisco J; Van der Loos, H F Machiel
2011-01-01
The adoption of technology in healthcare over the last twenty years has steadily increased, particularly as it relates to medical robotics and Multi-User Virtual Environments (MUVEs) such as Second Life. Both disciplines have been shown to improve the quality of care and have evolved, for the most part, in isolation from each other. In this paper, we present four synergies between medical robotics and MUVEs that have the potential to decrease resource utilization and improve the quality of healthcare delivery. We conclude with some foreseeable barriers and future research directions for researchers in these fields.
How to avoid simulation sickness in virtual environments during user displacement
NASA Astrophysics Data System (ADS)
Kemeny, A.; Colombet, F.; Denoual, T.
2015-03-01
Driving simulation (DS) and Virtual Reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands, including steering wheels, joysticks and nomad devices. High values of transport delay (the time lag between an action and the corresponding rendering cues) and/or the visual-vestibular conflict, due to the discrepancies perceived by the human visual and vestibular systems when driving or displacing using a control device, induce the so-called simulation sickness. While the visual transport delay can be efficiently reduced using a high frame rate, the visual-vestibular conflict is inherent to VR when motion platforms are not used. In order to study the impact of displacements on simulation sickness, we have tested various driving scenarios in Renault's 5-sided ultra-high resolution CAVE. First results indicate that low speed displacements with longitudinal and lateral accelerations below given perception thresholds are well accepted by a large number of users, whereas relatively high values are only accepted by experienced users and induce VR induced symptoms and effects (VRISE) for novice users, with a worst case scenario corresponding to rotational displacements. These results will be used for optimization techniques at Arts et Métiers ParisTech for motion sickness reduction in virtual environments for industrial, research, educational or gaming applications.
Evaluation of navigation interfaces in virtual environments
NASA Astrophysics Data System (ADS)
Mestre, Daniel R.
2014-02-01
When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually-induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a Flystick®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower than in experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.
KinImmerse: Macromolecular VR for NMR ensembles
Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C
2009-01-01
Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844
Rizzo, Albert Skip; Difede, JoAnn; Rothbaum, Barbara O; Reger, Greg; Spitalnick, Josh; Cukor, Judith; McLay, Rob
2010-10-01
Numerous reports indicate that the growing incidence of posttraumatic stress disorder (PTSD) in returning Operation Enduring Freedom (OEF)/Operation Iraqi Freedom (OIF) military personnel is creating a significant health care and economic challenge. These findings have served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD. Virtual reality-delivered exposure therapy for PTSD has been previously used with reports of positive outcomes. The current paper will detail the development and early results from use of the Virtual Iraq/Afghanistan exposure therapy system. The system consists of a series of customizable virtual scenarios designed to represent relevant Middle Eastern contexts for exposure therapy, including a city and desert road convoy environment. The process for gathering user-centered design feedback from returning OEF/OIF military personnel and from a system deployed in Iraq (as was needed to iteratively evolve the system) will be discussed, along with a brief summary of results from an open clinical trial using Virtual Iraq with 20 treatment completers, which indicated that 16 no longer met PTSD checklist-military criteria for PTSD after treatment. © 2010 Association for Research in Nervous and Mental Disease.
Design and Implementation of a 3D Multi-User Virtual World for Language Learning
ERIC Educational Resources Information Center
Ibanez, Maria Blanca; Garcia, Jose Jesus; Galan, Sergio; Maroto, David; Morillo, Diego; Kloos, Carlos Delgado
2011-01-01
The best way to learn is by having a good teacher and the best language learning takes place when the learner is immersed in an environment where the language is natively spoken. 3D multi-user virtual worlds have been claimed to be useful for learning, and the field of exploiting them for education is becoming more and more active thanks to the…
EduMOOs: Virtual Learning Centers.
ERIC Educational Resources Information Center
Woods, Judy C.
1998-01-01
Multi-user Object Oriented Internet activities (MOOs) permit real time interaction in a text-based virtual reality via the Internet. This article explains EduMOOs (educational MOOs) and provides brief descriptions, World Wide Web addresses, and telnet addresses for selected EduMOOs. Instructions for connecting to a MOO and a list of related Web…
The VERITAS Facility: A Virtual Environment Platform for Human Performance Research
2016-01-01
IAE: The IAE supports the audio environment that users experience during the course of an experiment. This includes environmental sounds, user-to...future, we are looking towards a database-backed system that would use MySQL or an equivalent product to store the large data sets and provide standard
Future Evolution of Virtual Worlds as Communication Environments
NASA Astrophysics Data System (ADS)
Prisco, Giulio
Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.
ERIC Educational Resources Information Center
Sandhu, Rizwan Saleem; Hussain, Sajid
2016-01-01
The online learning has broadened the teaching spectrum from Face-to-Face to virtual environment, and this move has brought traditional teacher-centered instruction to learner-centered instruction. This paradigm shift appears to place demands on faculty to modify faculty's instruction roles that are different from those encountered in Face-to-Face…
UkrVO astronomical WEB services
NASA Astrophysics Data System (ADS)
Mazhaev, A.
2017-02-01
Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases support data mining and offer users easy access to observation metadata, images within the celestial sphere, and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for data selection from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.
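Web services for region-of-interest selection of the kind described above commonly follow the IVOA Simple Cone Search convention: the region is specified by the standard RA, DEC and SR parameters (all in decimal degrees) in an HTTP GET request. The sketch below builds such a request URL; the endpoint is a hypothetical placeholder, not UkrVO's actual service address.

```python
from urllib.parse import urlencode

def cone_search_url(base, ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search request URL.

    `base` is a hypothetical service endpoint; RA, DEC and SR are the
    standard cone-search parameters in decimal degrees. The service
    responds with a VOTable of catalogue rows inside the cone.
    """
    query = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    return f"{base}?{query}"

# Half-degree cone around the Crab Nebula (example coordinates).
url = cone_search_url("https://example.org/ukrvo/scs", 83.63, 22.01, 0.5)
# → "https://example.org/ukrvo/scs?RA=83.63&DEC=22.01&SR=0.5"
```

Because the parameter names and units are standardized, the same client code can query any IVOA-compliant catalogue service by swapping the base URL.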
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.
2017-05-01
Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey and its processing for data integration into the game engine Unity is presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use of such a VR visualisation for a CH monument, including simultaneous multi-user environments, is discussed in this contribution.
Augmented Reality versus Virtual Reality for 3D Object Manipulation.
Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu
2018-02-01
Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured in task completion time, on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.
ERIC Educational Resources Information Center
McCreery, Michael P.; Schrader, P. G.; Krach, S. Kathleen
2011-01-01
There is a substantial and growing interest in immersive virtual spaces as contexts for 21st century skills like problem solving, communication, and collaboration. However, little consideration has been given to the ways in which users become proficient in these environments or what types of target behaviors are associated with 21st century…
ERIC Educational Resources Information Center
Poitras, Eric G.; Naismith, Laura M.; Doleck, Tenzin; Lajoie, Susanne P.
2016-01-01
This study aimed to identify misconceptions in medical student knowledge by mining user interactions in the MedU online learning environment. Data from 13000 attempts at a single virtual patient case were extracted from the MedU MySQL database. A subgroup discovery method was applied to identify patterns in learner-generated annotations and…
CyberDeutsch: Language Production and User Preferences in a Moodle Virtual Learning Environment
ERIC Educational Resources Information Center
Stickler, Ursula; Hampel, Regine
2010-01-01
This case study focuses on two learners who took part in an intensive online German course offered to intermediate level students in the Department of Languages of the Open University. The course piloted the use of a Moodle-based virtual learning environment and a range of new online tools which lend themselves to different types of language…
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
Distributed collaborative environments for virtual capability-based planning
NASA Astrophysics Data System (ADS)
McQuay, William K.
2003-09-01
Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning, and on implementing it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis together with information technology. These tools are applied collaboratively, via a technical framework, by all the affected stakeholders: warfighter, laboratory, product center, logistics center, test center, and prime contractor.
Enhancing Security by System-Level Virtualization in Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei
Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance; in addition, it makes it possible to balance workloads using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.
Tools virtualization for command and control systems
NASA Astrophysics Data System (ADS)
Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław
2017-10-01
Information management is an inseparable part of the command process, so the person making decisions at the command post interacts with data-providing devices in various ways. Tool virtualization can introduce a number of significant modifications in the design of solutions for management and command. The general idea involves replacing the user interfaces of physical devices with digital representations (so-called virtual instruments). A more advanced level of system "digitalization" is to use mixed reality (MR) environments. In solutions using Augmented Reality (AR), a customized HMI is displayed to the operator as he approaches each device. Devices are identified by image recognition of photo codes. Visualization is achieved by an (optical) see-through head-mounted display (HMD), and control can be exercised, for example, by means of a handheld touch panel. Using an immersive virtual environment, the command center can be digitally reconstructed: a workstation requires only a VR system (HMD) and access to the information network, and the operator can interact with devices much as he would in the real world (for example, with virtual hands). Because MR systems can analyze central vision and track the eyes, they offer another useful feature: reducing system data-throughput requirements by concentrating, at any moment, on the single device in focus. Experiments carried out using Moverio BT-200 and SteamVR systems, and the results of experimental application testing, clearly indicate the ability to create a fully functional information system with the use of mixed reality technology.
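At its core, the AR workflow this abstract describes (recognize a photo code, then overlay that device's customized HMI) reduces to a marker-ID-to-interface dispatch. The toy sketch below illustrates that dispatch only; the device names, registry, and the `recognize_marker` stub are invented for illustration, not taken from the paper's system.

```python
# Toy dispatch from recognized photo-code IDs to virtual-instrument HMIs.
# Device names and the recognition stub are invented for illustration.

HMI_REGISTRY = {
    "radio-01": "Radio control panel",
    "radar-02": "Radar status display",
}

def recognize_marker(frame: bytes):
    """Stand-in for image recognition of a photo code in a camera frame.

    A real system would decode the marker from image data; here we
    pretend the frame already contains the decoded ID.
    """
    marker_id = frame.decode(errors="ignore")
    return marker_id if marker_id in HMI_REGISTRY else None

def hmi_for_frame(frame: bytes) -> str:
    """Return the HMI to display for the device currently in view."""
    marker = recognize_marker(frame)
    return HMI_REGISTRY.get(marker, "No device in view")
```

Focusing rendering on whichever single device the dispatch returns is one way such a system could keep data-throughput requirements low.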
Earth Adventure: Virtual Globe-based Suborbital Atmospheric Greenhouse Gases Exploration
NASA Astrophysics Data System (ADS)
Wei, Y.; Landolt, K.; Boyer, A.; Santhana Vannan, S. K.; Wei, Z.; Wang, E.
2016-12-01
The Earth Venture Suborbital (EVS) mission is an important component of NASA's Earth System Science Pathfinder program that aims at making substantial advances in Earth system science through measurements from suborbital platforms and modeling research. For example, the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) project of EVS-1 collected measurements of greenhouse gases (GHG) on local to regional scales in the Alaskan Arctic. The Atmospheric Carbon and Transport - America (ACT-America) project of EVS-2 will provide advanced, high-resolution measurements of atmospheric profiles and horizontal gradients of CO2 and CH4. As the long-term archival center for CARVE and the future ACT-America data, the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has been developing a versatile data management system for CARVE data to maximize their usability. One of these efforts is the virtual globe-based Suborbital Atmospheric GHG Exploration application. It leverages Google Earth to simulate the 185 flights flown by the C-23 Sherpa aircraft in 2012-2015 for the CARVE project. Based on Google Earth's 3D modeling capability and the precise coordinates, altitude, pitch, roll, and heading of the aircraft recorded every second of each flight, the application gives users an accurate and vivid simulation of the flight experience, with an active 3D visualization of a C-23 Sherpa aircraft in view. The application dynamically visualizes the GHGs captured during the flights, including CO2, CO, H2O, and CH4, at the same pace as the flight simulation in Google Earth. Photos taken during those flights are also displayed along the flight paths. In the future, this application will be extended to incorporate more complex GHG measurements (e.g., vertical profiles) from the ACT-America project.
This application leverages virtual globe technology to provide users with an integrated framework to interactively explore information about GHG measurements and to link scientific measurements to the rich virtual planet environment provided by Google Earth. Positive feedback has been received from users. It provides a good example of extending basic data visualization into a knowledge discovery experience and maximizing the usability of Earth science observations.
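The per-second position records described above map naturally onto KML's gx:Track element, which Google Earth animates along a timeline. The sketch below is a minimal illustration of that mapping, not the ORNL DAAC application's actual code: element names follow the public KML 2.2 / Google gx extension schema, and the sample flight-log rows are invented.

```python
# Build a minimal KML gx:Track from per-second flight-log records.
# Element names follow the KML 2.2 gx extension; the records are invented.

def flight_track_kml(records):
    """records: list of (iso_time, lon, lat, alt_m) tuples.

    Per the gx:Track schema, all <when> timestamps come first,
    followed by the matching <gx:coord> positions in the same order.
    """
    whens = "\n".join(f"      <when>{t}</when>" for t, *_ in records)
    coords = "\n".join(
        f"      <gx:coord>{lon} {lat} {alt}</gx:coord>"
        for _, lon, lat, alt in records
    )
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <Placemark>
    <name>Sample suborbital flight</name>
    <gx:Track>
      <altitudeMode>absolute</altitudeMode>
{whens}
{coords}
    </gx:Track>
  </Placemark>
</kml>"""

# Two invented one-second-apart samples over interior Alaska:
sample = [
    ("2013-06-01T20:00:00Z", -147.86, 64.82, 1200.0),
    ("2013-06-01T20:00:01Z", -147.87, 64.83, 1210.0),
]
kml = flight_track_kml(sample)
```

Aircraft attitude (heading, pitch, roll) could be added per sample with the gx:angles element, which Google Earth uses to orient a 3D model along the track.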
ERIC Educational Resources Information Center
McGlamery, Susan; Coffman, Steve
2000-01-01
Explores the possibility of using Web contact center software to offer reference assistance to remote users. Discusses a project by the Metropolitan Cooperative Library System/Santiago Library System consortium to test contact center software and to develop a virtual reference network. (Author/LRW)
Investigation of tracking systems properties in CAVE-type virtual reality systems
NASA Astrophysics Data System (ADS)
Szymaniak, Magda; Mazikowski, Adam; Meironke, Michał
2017-08-01
In recent years, many scientific and industrial centers around the world have developed virtual reality systems or laboratories. One of the most advanced solutions is the Immersive 3D Visualization Lab (I3DVL), a CAVE-type (Cave Automatic Virtual Environment) laboratory. It contains two CAVE-type installations: a six-screen installation arranged in the form of a cube, and a four-screen installation, a simplified version of the former. The user's feeling of "immersion" and interaction with the virtual world depend on many factors, in particular on the accuracy of the user tracking system. In this paper, the properties of the tracking systems applied in I3DVL were investigated. Two parameters were selected for analysis: the accuracy of the tracking system and the range over which it detects markers in the space of the CAVE. Measurements of system accuracy were performed for the six-screen installation, equipped with four tracking cameras, along three axes: X, Y, Z. Rotation around the Y axis was also analyzed. The measured tracking system shows good linear and rotational accuracy. The biggest issue was the range over which markers were monitored inside the CAVE: it turned out that the tracking system loses sight of the markers in the corners of the installation. For comparison, in the simplified version of the CAVE (the four-screen installation), equipped with eight tracking cameras, this problem did not occur. The results obtained will allow for improvement of CAVE quality.
Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia
Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos
2015-01-01
Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches based on cognitive models, in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules, embodied in a cognitive architecture, that can detect abnormal cognitive decline across new synthetic task environments. Likewise, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimates of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to multitask, and monitored their performance on infotainment-related tasks. This provides more accurate simulation results within existing conceptual frameworks and enhances the predictive validity of interface design, with increased task complexity used to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and to interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interaction and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application that integrates buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's navigation in the real world is achieved with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complexity issues, which are discussed in the paper.
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools is incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated, and wireless devices can be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
Get immersed in the Soil Sciences: the first community of avatars in the EGU Assembly 2015!
NASA Astrophysics Data System (ADS)
Castillo, Sebastian; Alarcón, Purificación; Beato, Mamen; Emilio Guerrero, José; José Martínez, Juan; Pérez, Cristina; Ortiz, Leovigilda; Taguas, Encarnación V.
2015-04-01
Virtual reality and immersive worlds refer to artificial computer-generated environments with which users act and interact, as in a familiar environment, through figurative virtual individuals (avatars). Virtual environments will be the technology of the early twenty-first century that most dramatically changes the way we live, particularly in the areas of training and education, product development and entertainment (Schmorrow, 2009). The usefulness of immersive worlds has been proved in different fields: they reduce geographic and social barriers between different stakeholders and create virtual social spaces which can positively impact learning and discussion outcomes (Lorenzo et al., 2012). In this work we present a series of interactive meetings in a virtual building, celebrating the International Year of Soil, to promote the importance of soil functions and soil conservation. In a virtual room, the avatars of senior researchers will meet young scientists' avatars to talk about: 1) what remains to be done in the Soil Sciences; 2) what the main current limitations and difficulties are; and 3) what the future hot research lines are. Interactive participation does not require physically attending the EGU Assembly 2015. In addition, this virtual building inspired by the Soil Sciences can be completed with teaching resources from different locations around the world, and it will be used to improve the learning of Soil Sciences in a multicultural context. REFERENCES: Lorenzo C.M., Sicilia M.A., Sánchez S. 2012. Studying the effectiveness of multi-user immersive environments for collaborative evaluation tasks. Computers & Education 59: 1361-1376. Schmorrow D.D. 2009. "Why virtual?" Theoretical Issues in Ergonomics Science 10(3): 279-282.
Kalawsky, R S
1999-02-01
A special questionnaire (VRUSE) has been designed to measure the usability of a VR system according to the attitudes and perceptions of its users. Important aspects of VR systems were carefully derived to produce key usability factors for the questionnaire. Unlike questionnaires designed for generic interfaces, VRUSE is specifically designed for evaluating virtual environments: it is a diagnostic tool providing a wealth of information about a user's view of the interface. VRUSE can be used to great effect alongside other evaluation techniques to pinpoint problem areas of a VR interface. Other applications include benchmarking of competitor VR systems.
Virtual Reality Training Environments: Contexts and Concerns.
ERIC Educational Resources Information Center
Harmon, Stephen W.; Kenney, Patrick J.
1994-01-01
Discusses the contexts where virtual reality (VR) training environments might be appropriate; examines the advantages and disadvantages of VR as a training technology; and presents a case study of a VR training environment used at the NASA Johnson Space Center in preparation for the repair of the Hubble Space Telescope. (AEF)
Transcript of the "Overview of the Transportation Data Center" Webinar |
Center"-that was originally presented by Jeff Gonder of the National Renewable Energy Laboratory in a designated secure environment facility, we set up the TSDC as a virtual data center where you go in and work with the data there. But in this case, it's a virtual connection so you
Shared virtual environments for telerehabilitation.
Popescu, George V; Burdea, Grigore; Boian, Rares
2002-01-01
Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process: these controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring, the monitoring portal for hand telerehabilitation, was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal delivers real-time performance on off-the-shelf desktop workstations.
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. A real-time virtual reality system then updates the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be produced. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.
Using Virtual Reality to Improve Walking Post-Stroke: Translation to Individuals with Diabetes
Deutsch, Judith E
2011-01-01
Use of virtual reality (VR) technology to improve walking for people post-stroke has been studied for its clinical application since 2004. The hardware and software used to create these systems has varied but has predominantly been constituted by projected environments with users walking on treadmills. Transfer of training from the virtual environment to real-world walking has modest but positive research support. Translation of the research findings to clinical practice has been hampered by commercial availability and costs of the VR systems. Suggestions for how the work for individuals post-stroke might be applied and adapted for individuals with diabetes and other impaired ambulatory conditions include involvement of the target user groups (both practitioners and clients) early in the design and integration of activity and education into the systems. PMID:21527098
Using virtual reality to improve walking post-stroke: translation to individuals with diabetes.
Deutsch, Judith E
2011-03-01
Use of virtual reality (VR) technology to improve walking for people post-stroke has been studied for its clinical application since 2004. The hardware and software used to create these systems has varied but has predominantly been constituted by projected environments with users walking on treadmills. Transfer of training from the virtual environment to real-world walking has modest but positive research support. Translation of the research findings to clinical practice has been hampered by commercial availability and costs of the VR systems. Suggestions for how the work for individuals post-stroke might be applied and adapted for individuals with diabetes and other impaired ambulatory conditions include involvement of the target user groups (both practitioners and clients) early in the design and integration of activity and education into the systems. © 2011 Diabetes Technology Society.
Ye, Quanliang; Li, Yi; Zhuo, La; Zhang, Wenlong; Xiong, Wei; Wang, Chao; Wang, Peifang
2018-02-01
This study provides an innovative application of virtual water trade to the traditional allocation of physical water resources in water-scarce regions. A multi-objective optimization model was developed to optimize the allocation of physical and virtual water resources to different water users in Beijing, China, considering the trade-offs between the economic benefit and the environmental impacts of water consumption. Surface water, groundwater, transferred water and reclaimed water constituted the physical supply side, while virtual water flow was associated with the trade of five major crops (barley, corn, rice, soy and wheat) and three livestock products (beef, pork and poultry) in the agricultural sector (calculated from the trade quantities of the products and their virtual water contents). Urban use (daily activities and public facilities), industry, environment and agriculture (crop growing) were considered on the demand side. For the traditional allocation of physical water resources, the results showed that agriculture and urban use were the two predominant water users (accounting for 54% and 28%, respectively), while groundwater and surface water satisfied around 70% of the water demands of the different users (accounting for 36% and 34%, respectively). When the virtual water trade of the eight agricultural products was considered in the water allocation procedure, the share of agricultural consumption decreased to 45% of total water demand, while groundwater consumption decreased to 24% of total water supply. Virtual water trade overturned the traditional composition of water supplied from different sources for agricultural consumption, and became the largest water source in Beijing. Additionally, environmental demand took a similar percentage of water consumption from each water source. Reclaimed water was the main water source for industrial and environmental users.
The results suggest that physical water resources would mainly satisfy the consumption of urban users and the environment, and that the gap between water supply and demand could be filled by virtual water imports in water-scarce regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
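The multi-objective allocation this abstract describes can be illustrated as a weighted-sum linear program: maximize economic benefit minus a weighted penalty on environmental impact, subject to source capacities and user demands. The sketch below is schematic only; it uses scipy.optimize.linprog with invented coefficients for two sources and two users, not the study's actual model or Beijing data.

```python
# Weighted-sum sketch of a two-objective water-allocation LP.
# Decision variables x[s, u]: water sent from source s to user u.
# All coefficients below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

benefit = np.array([[4.0, 6.0],    # economic benefit per unit (source x user)
                    [3.0, 5.0]])
impact = np.array([[1.0, 0.5],     # environmental impact per unit
                   [2.0, 1.0]])
w = 0.5                            # weight trading benefit against impact

# linprog minimizes, so negate the net objective (benefit - w * impact).
c = -(benefit - w * impact).ravel()

supply = [100.0, 80.0]             # capacity of each source
demand = [60.0, 90.0]              # requirement of each user

# Supply constraints: allocations from each source cannot exceed capacity.
A_ub = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
# Demand constraints: each user's demand must be met exactly.
A_eq = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]])

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 4)
alloc = res.x.reshape(2, 2)        # optimal source-to-user allocation
```

Sweeping the weight `w` traces out the trade-off curve between the economic and environmental objectives, which is the usual way a weighted-sum formulation approximates a multi-objective frontier.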
Sarver, Nina Wong; Beidel, Deborah C; Spitalnick, Josh S
2014-01-01
Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder; however, the in vivo peer generalization sessions and standard parent-assisted homework assignments were replaced by practice in a virtual environment. Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. In addition, parents were satisfied with the virtual-environment-augmented treatment and indicated that they would recommend the program to family and friends. Findings indicate that the virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder.
Wong, Nina; Beidel, Deborah C.; Spitalnick, Josh
2013-01-01
Objective Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Method Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder; however, the in vivo peer generalization sessions and standard parent-assisted homework assignments were replaced by practice in a virtual environment. Results Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. Additionally, parents were satisfied with the virtual-environment-augmented treatment and indicated that they would recommend the program to family and friends. Conclusion Virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder. PMID:24144182
Emerging Conceptual Understanding of Complex Astronomical Phenomena by Using a Virtual Solar System
ERIC Educational Resources Information Center
Gazit, Elhanan; Yair, Yoav; Chen, David
2005-01-01
This study describes high school students' conceptual development of the basic astronomical phenomena during real-time interactions with a Virtual Solar System (VSS). The VSS is a non-immersive virtual environment which has a dynamic frame of reference that can be altered by the user. Ten 10th grade students were given tasks containing a set of…
Interfacing modeling suite Physics Of Eclipsing Binaries 2.0 with a Virtual Reality Platform
NASA Astrophysics Data System (ADS)
Harriett, Edward; Conroy, Kyle; Prša, Andrej; Klassner, Frank
2018-01-01
To explore alternative methods for modeling eclipsing binary stars, we extend PHOEBE's (PHysics Of Eclipsing BinariEs) capabilities into a virtual reality (VR) environment to create an immersive and interactive experience for users. The application used is Vizard, a Python-scripted VR development platform for environments such as the Cave Automatic Virtual Environment (CAVE) and off-the-shelf VR headsets. Vizard allows all modeling to be precomputed without compromising functionality or usability. The system requires five arguments to be precomputed using PHOEBE's Python front-end: the effective temperature, flux, relative intensity, vertex coordinates, and orbits; the user can opt to implement other features from PHOEBE to be accessed within the simulation as well. Here we present the method for making the data observables accessible in real time. An Oculus Rift will be available for a live showcase of VR renderings of various PHOEBE binary systems, including detached and contact binary stars.
Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR.
Langbehn, Eike; Lubos, Paul; Bruder, Gerd; Steinicke, Frank
2017-04-01
Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.
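As a hedged illustration of the central quantity in this abstract, the sketch below defines a bending gain as the ratio between real-path and virtual-path curvature (taking curvature as 1/radius; the paper's exact formulation may differ) and checks it against a placeholder detection-threshold interval. The threshold numbers are invented, not the study's measured values.

```python
# Illustrative bending-gain computation for redirected walking (RDW).
# Definitions are assumptions consistent with the abstract, not the
# paper's exact formulation; the threshold interval is a placeholder.

def bending_gain(virtual_radius_m: float, real_radius_m: float) -> float:
    """Ratio of real-path curvature to virtual-path curvature.

    gain > 1 means the physical path is bent more tightly than the
    virtual path the user perceives.
    """
    return (1.0 / real_radius_m) / (1.0 / virtual_radius_m)

def is_undetectable(gain: float, lo: float = 0.8, hi: float = 1.3) -> bool:
    """True if the gain lies inside a (hypothetical) detection threshold."""
    return lo <= gain <= hi

# A virtual trail curving with a 10 m radius, walked in a room that
# only permits an 8 m physical radius:
g = bending_gain(virtual_radius_m=10.0, real_radius_m=8.0)
```

In this scheme, room-scale RDW planning amounts to choosing physical radii whose gains stay inside the measured threshold interval for each curved segment of the virtual path.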
Amann, Julia; Zanini, Claudia; Rubinelli, Sara
2016-01-01
Background: In order to adapt to societal changes, healthcare systems need to switch from a disease orientation to a patient-centered approach. Virtual patient networks are a promising tool to favor this switch, and much can be learned from the open and user innovation literature, where the involvement of online user communities in the innovation process is well-documented. Objectives: The objectives of this study were 1) to describe the use of online communities as a tool to capture and harness innovative ideas of end users or consumers; and 2) to point to the potential value and challenges of these virtual platforms as a tool to inform and promote patient-centered care in the context of chronic health conditions. Methods: A scoping review was conducted. A total of seven databases were searched for scientific articles published in English between 1995 and 2014. The search strategy was refined through an iterative process. Results: A total of 144 studies were included in the review. Studies were coded inductively according to their research focus to identify groupings of papers. The first set of studies focused on the interplay of factors related to user roles, motivations, and behaviors that shape the innovation process within online communities. Studies of the second set examined the role of firms in online user innovation initiatives, identifying different organizational strategies and challenges. The third set of studies focused on the idea selection process and measures of success with respect to online user innovation initiatives. Finally, the findings from the review are presented in the light of the particularities and challenges discussed in current healthcare research. Conclusion: The present paper highlights the potential of virtual patient communities to inform and promote patient-centered care, describes the key challenges involved in this process, and makes recommendations on how to address them. PMID:27272912
Virtual reality: past, present and future.
Gobbetti, E; Scateni, R
1998-01-01
This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domains; section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users; section 3 surveys current input/output devices for virtual reality; section 4 surveys current software approaches to support the creation of virtual reality systems; and section 5 summarizes the report.
NASA Astrophysics Data System (ADS)
Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.
2017-12-01
The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, including NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting-edge infrastructure, including an InfiniBand network for high-speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis, thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, including the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic-Boreal Vulnerability Experiment (ABoVE).
Phenomenology tools on cloud infrastructures using OpenStack
NASA Astrophysics Data System (ADS)
Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.
2013-04-01
We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment, users can create and manage virtual machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on virtual machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
Research on Intelligent Synthesis Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Lobeck, William E.
2002-01-01
Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.
Research on Intelligent Synthesis Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.; Loftin, R. Bowen
2002-12-01
Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.
CliniSpace: a multiperson 3D online immersive training environment accessible through a browser.
Dev, Parvati; Heinrichs, W LeRoy; Youngblood, Patricia
2011-01-01
Immersive online medical environments, with dynamic virtual patients, have been shown to be effective for scenario-based learning (1). However, ease of use and ease of access have been barriers to their use. We used feedback from prior evaluation of these projects to design and develop CliniSpace. To improve usability, we retained the richness of prior virtual environments but modified the user interface. To improve access, we used a Software-as-a-Service (SaaS) approach to present a richly immersive 3D environment within a web browser.
Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo
2014-04-15
Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii with its Nunchuk and Balance Board peripherals, head-mounted displays, and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular, and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented.
Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments
Víctor Rodrigo, Mercado-García
2017-01-01
Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861
Estimation of detection thresholds for redirected walking techniques.
Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus
2010-01-01
In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature while walking a curved path in the real world as the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
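The detection thresholds reported above can be encoded directly as ratio bounds that a redirection controller must respect. The constants come straight from the abstract; the helper function and its name are illustrative.

```python
# Thresholds from the abstract: physical rotation may be up to 49% more
# or 20% less than the virtual one; walked distance may be downscaled
# 14% or upscaled 26%; curvature redirection needs a physical radius
# of at least 22 m.
ROT_RATIO = (0.80, 1.49)     # physical/virtual rotation
DIST_RATIO = (0.86, 1.26)    # physical/virtual distance (direction assumed)
MIN_CURVE_RADIUS_M = 22.0

def rotation_undetectable(physical_deg, virtual_deg):
    """True if the rotation manipulation stays below detection thresholds."""
    return ROT_RATIO[0] <= physical_deg / virtual_deg <= ROT_RATIO[1]

print(rotation_undetectable(130.0, 100.0))  # True: 30% extra physical turn
print(rotation_undetectable(60.0, 100.0))   # False: 40% less is detectable
```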
CLEW: A Cooperative Learning Environment for the Web.
ERIC Educational Resources Information Center
Ribeiro, Marcelo Blois; Noya, Ricardo Choren; Fuks, Hugo
This paper outlines CLEW (collaborative learning environment for the Web). The project combines MUD (Multi-User Dimension), workflow, VRML (Virtual Reality Modeling Language) and educational concepts like constructivism in a learning environment where students actively participate in the learning process. The MUD shapes the environment structure.…
Virtual World Learning Spaces: Developing a Second Life Operating Room Simulation
ERIC Educational Resources Information Center
Gerald, Stephanie; Antonacci, David M.
2009-01-01
User-created virtual worlds, such as Second Life, are a hot topic in higher education. Thousands of educators are currently exploring and using Second Life, and hundreds of colleges and universities have purchased and developed their own private islands in Second Life, including the University of Kansas Medical Center (KUMC). Because it is so easy…
Providing Personal Assistance in the SAGRES Virtual Museum.
ERIC Educational Resources Information Center
Bertoletti, Ana Carolina; Moraes, Marcia Cristina; da Rocha Costa, Antonio Carlos
The SAGRES system is an educational environment built on the Web that facilitates the organization of visits to museums, presenting museum information bases in a way adapted to the user's characteristics (capacities and preferences). The system determines the group of links appropriate to the user(s) and shows them in a resultant HTML page. In…
NASA Technical Reports Server (NTRS)
Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard
2003-01-01
The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented; and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desktop-sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and defusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.
NASA Technical Reports Server (NTRS)
Aucoin, P. J.; Stewart, J.; Mckay, M. F. (Principal Investigator)
1980-01-01
This document presents instructions for analysts who use the EOD-LARSYS as programmed on the Purdue University IBM 370/148 (recently replaced by the IBM 3031) computer. It presents sample applications, control cards, and error messages for all processors in the system and gives detailed descriptions of the mathematical procedures and information needed to execute the system and obtain the desired output. EOD-LARSYS is the JSC version of an integrated batch system for analysis of multispectral scanner imagery data. The data included are designed for use with the as-built documentation (volume 3) and the program listings (volume 4). The system is operational from remote terminals at Johnson Space Center under the virtual machine/conversational monitor system environment.
John, Nigel W; Pop, Serban R; Day, Thomas W; Ritsos, Panagiotis D; Headleand, Christopher J
2018-05-01
Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environments technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study carried out using thirty-three able-bodied volunteers. The study results indicate that, at a 5 percent significance level, there is an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent to training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.
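The significance claim above rests on a standard pre/post comparison. A minimal sketch of such an analysis is a paired t-test on per-participant driving scores; the scores below are invented for illustration, since the abstract does not report the raw data.

```python
import math

def paired_t(pre, post):
    """Paired t-statistic for per-participant improvement (post - pre)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical driving-skill scores before and after VR training.
pre = [5.0, 6.0, 4.5, 5.5, 6.5, 5.0]
post = [6.0, 6.5, 5.5, 6.0, 7.5, 6.0]
t = paired_t(pre, post)
# Compare |t| with the two-sided 5% critical value for n-1 degrees of
# freedom (about 2.57 for 5 df) to decide significance.
print(round(t, 2))  # 7.91
```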
An artificial reality environment for remote factory control and monitoring
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.
Training software using virtual-reality technology and pre-calculated effective dose data.
Ding, Aiping; Zhang, Di; Xu, X George
2009-05-01
This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in VR authoring software, EON Studio, using 3-D models of a real nuclear power plant building. Models of avatars representing two workers were adopted, with arms and legs of the avatar being controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and avatars to confine the avatars to certain regions of the virtual environment. Ten different camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control the avatar to carry out radiological engineering tasks using two modes of avatar navigation. A user can also specify two types of radiation source: Cs and Co. The location of the avatar inside the virtual environment during the course of the avatar's movement is linked to the EDE database. The accumulated dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data such as a source near the body, all of which need to be addressed in future efforts to develop this type of software.
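The real-time dose bookkeeping the paper describes can be sketched as a position-indexed lookup into a pre-calculated dose-rate table, integrated per simulation step. Region names and dose rates below are invented for illustration, not values from the paper's EDE database.

```python
# Hypothetical pre-calculated dose rates per plant region, in mSv/h.
DOSE_RATE_mSv_PER_H = {"corridor": 0.02, "pump_room": 1.5, "control_room": 0.001}

def accumulate(path, dt_seconds):
    """Integrate dose along a path: one region name per step of dt seconds."""
    total = 0.0
    for region in path:
        total += DOSE_RATE_mSv_PER_H[region] * dt_seconds / 3600.0
    return total

# One minute in the corridor, then two minutes in the pump room.
dose = accumulate(["corridor"] * 60 + ["pump_room"] * 120, dt_seconds=1.0)
print(round(dose, 5))  # accumulated dose in mSv, shown to the trainee each frame
```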
Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos
NASA Astrophysics Data System (ADS)
Chang, Chia-Hu; Wu, Ja-Ling
With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert user-selected virtual content into personal videos in a less intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual content into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of the message conveyed by the virtual content but also as the environment in which the lifelike virtual content lives. Thus, the inserted virtual content is affected by the videos, triggering a series of artificial evolutions in which it evolves its appearance and behaviors while interacting with the video content. By inserting virtual content into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it may bring a new opportunity to increase advertising revenue for video assets of the media industry and online video-sharing websites.
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir
2014-01-01
Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
Transduction between worlds: using virtual and mixed reality for earth and planetary science
NASA Astrophysics Data System (ADS)
Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.
2017-12-01
Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.
ERIC Educational Resources Information Center
Yu, Tao Wang
2009-01-01
A much more attractive way to use the internet has been discovered. Users are represented by avatars in fantasy persistent 3D worlds, and the avatars apparently come to occupy a special place in the hearts of their creators (Castronova, 2001). At present, millions of people worldwide hold accounts in some kind of virtual environment. Virtual world…
Virtual Reality Visualization of Permafrost Dynamics Along a Transect Through Northern Alaska
NASA Astrophysics Data System (ADS)
Chappell, G. G.; Brody, B.; Webb, P.; Chord, J.; Romanovsky, V.; Tipenko, G.
2004-12-01
Understanding permafrost dynamics poses a significant challenge for researchers and planners. Our project uses nontraditional visualization tools to create a 3-D interactive virtual-reality environment in which permafrost dynamics can be explored and experimented with. We have incorporated a numerical soil temperature model by Gennadiy Tipenko and Vladimir Romanovsky of the Geophysical Institute at the University of Alaska Fairbanks into an animated tour in space and time in the virtual reality facility of the Arctic Region Supercomputing Center at the University of Alaska Fairbanks. The software is being written by undergraduate interns Patrick Webb and Jordanna Chord under the direction of Professors Chappell and Brody. When using our software, the user appears to be surrounded by a 3-D computer-generated model of the state of Alaska. The eastern portion of the state is displaced upward from the western portion. The data are represented on an animated vertical strip running between the two parts, as if eastern Alaska were raised up and the soil at the cut could be viewed. We use coloring to highlight significant properties and features of the soil: temperature, the active layer, etc. The user can view data from various parts of the state simply by walking to the appropriate location in the model, or by using a flying-style interface to cover longer distances. Using a control panel, the user can also alter the time, viewing the data for a particular date, or watching the data change with time: a high-speed movie in which long-term changes in permafrost are readily apparent. In the second phase of the project, we connect the visualization directly to the model, running in real time. We allow the user to manipulate the input data and get immediate visual feedback.
For example, the user might specify the kind and placement of ground cover, by ``painting'' snowpack, plant species, or fire damage, and be able to see the effect on permafrost stability with no significant time lag.
Implementing virtual reality interfaces for the geosciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.; Jacobsen, J.; Austin, A.
1996-06-01
For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, running the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters; rich, extensible and often complex languages are the vehicle whereby the user manipulates object position and location in a virtual world, but the keyboard is an obstacle in that typing is cumbersome, error-prone and typically slow. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display, and the user can interact with these parameters by means of highly developed motor skills. Two specific geoscience application areas will be highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.
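The "rake" tool described above can be illustrated with a minimal streamline tracer. This is a generic sketch of seed-based streamline computation (forward-Euler integration of a velocity field), not the laboratory's actual algorithm; all names here are hypothetical.

```python
def streamline(seed, velocity, step=0.1, n_steps=100):
    """Trace a streamline from a seed point by forward-Euler integration
    of a 2-D velocity field."""
    pts = [seed]
    x, y = seed
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x, y = x + step * vx, y + step * vy
        pts.append((x, y))
    return pts

def rake_streamlines(rake_seeds, velocity):
    """A rake is just a row of seed points, each spawning one streamline."""
    return [streamline(s, velocity) for s in rake_seeds]

# Toy uniform flow: every streamline drifts in +x.
flow = lambda x, y: (1.0, 0.0)
lines = rake_streamlines([(0.0, 0.0), (0.0, 1.0)], flow)
```

Production tracers typically use adaptive Runge-Kutta integration and 3-D fields, but the rake-as-seed-row idea is unchanged.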
NASA Astrophysics Data System (ADS)
Bainbridge, William Sims
2010-01-01
This essay introduces the opportunity for theory development and even empirical research on some aspects of astrosociology through today's online virtual worlds. The examples covered present life on other planets or in space itself, in a manner that can be experienced by the user and where the user's reactions may simulate to some degree future human behavior in real extraterrestrial environments: Tabula Rasa, Anarchy Online, Entropia Universe, EVE Online, StarCraft and World of Warcraft. Ethnographic exploration of these computerized environments raises many questions about the social science both of space exploration and of direct contact with extraterrestrials. The views expressed in this essay do not necessarily represent the views of the National Science Foundation or the United States.
Temporally coherent 4D video segmentation for teleconferencing
NASA Astrophysics Data System (ADS)
Ehmann, Jana; Guleryuz, Onur G.
2013-09-01
We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
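The pipeline in this abstract (fill missing depth values, segment on depth, smooth temporally) might look as follows in toy form. This is an assumed simplification: the paper's actual hole-filling and RGB refinement steps are more sophisticated, and all names here are hypothetical.

```python
import numpy as np

def fill_missing_depth(depth, invalid=0):
    """Replace invalid depth samples with the row-wise nearest valid value
    (a crude stand-in for the depth-map correction step)."""
    out = depth.astype(float)
    for row in out:
        valid = np.flatnonzero(row != invalid)
        if valid.size == 0:
            continue
        missing = np.flatnonzero(row == invalid)
        nearest = valid[np.abs(missing[:, None] - valid[None, :]).argmin(axis=1)]
        row[missing] = row[nearest]
    return out

def segment_user(depth, threshold, prev_mask=None, alpha=0.7):
    """Foreground = closer than threshold (mm); blend with the previous
    frame's mask for temporal coherence (reduces flicker)."""
    mask = (depth < threshold).astype(float)
    if prev_mask is not None:
        mask = alpha * mask + (1 - alpha) * prev_mask
    return mask

# Tiny 2x4 depth frame with zeros marking missing samples.
d = np.array([[0, 900, 0, 2500], [800, 0, 2400, 2600]])
filled = fill_missing_depth(d)
m = segment_user(filled, threshold=1500)
```

In the paper the depth-based mask is further refined with RGB information; here the mask comes from depth alone.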
Virtual reality and interactive 3D as effective tools for medical training.
Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave
2003-01-01
CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience. This maps to the strong need for distance training and planning among surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is that of an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated and corrective procedures demonstrated.
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.
Rutkowski, Tomasz M
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applications of tactile and auditory BCI technologies to direct brain-robot and brain-virtual-reality-agent control interfaces. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents through thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.
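The UDP link between the BCI output and the robot or virtual agent can be sketched as below. The command vocabulary and message format are hypothetical; the paper does not specify them.

```python
import socket

# Hypothetical command vocabulary for decoded BCI intentions.
COMMANDS = {"left", "right", "forward", "stop"}

def send_bci_command(cmd, addr):
    """Send one decoded intention to the robot/agent as a UDP datagram."""
    if cmd not in COMMANDS:
        raise ValueError(f"unknown command: {cmd}")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(cmd.encode("utf-8"), addr)

# Loopback demo: a listener standing in for the robot controller.
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(("127.0.0.1", 0))                  # OS picks a free port
send_bci_command("forward", robot.getsockname())
data, _ = robot.recvfrom(1024)
robot.close()
```

UDP is connectionless and tolerates dropped datagrams, which suits a stream of frequently refreshed control commands.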
New Desktop Virtual Reality Technology in Technical Education
ERIC Educational Resources Information Center
Ausburn, Lynna J.; Ausburn, Floyd B.
2008-01-01
Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…
Virtualization in the Operations Environments
NASA Technical Reports Server (NTRS)
Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert
2010-01-01
Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the advantages and disadvantages of virtualization in all of the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications comprising three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing the application data placements together with the state of the virtual environment, taking the network topology into account. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases performance in processing user requests. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.
The impact of self-avatars on trust and collaboration in shared virtual environments.
Pan, Ye; Steed, Anthony
2017-01-01
A self-avatar is known to have a potentially significant impact on the user's experience of the immersive content but it can also affect how users interact with each other in a shared virtual environment (SVE). We implemented an SVE for a consumer virtual reality system where each user's body could be represented by a jointed self-avatar that was dynamically controlled by head and hand controllers. We investigated the impact of a self-avatar on collaborative outcomes such as completion time and trust formation during competitive and cooperative tasks. We used two different embodiment levels: no self-avatar and self-avatar, and compared these to an in-person face to face version of the tasks. We found that participants could finish the task more quickly when they cooperated than when they competed, for both the self-avatar condition and the face to face condition, but not for the no self-avatar condition. In terms of trust formation, both the self-avatar condition and the face to face condition led to higher scores than the no self-avatar condition; however, collaboration style had no significant effect on trust built between partners. The results are further evidence of the importance of a self-avatar representation in immersive virtual reality.
A Virtual Environment for People Who Are Blind – A Usability Study
Lahav, O.; Schloerb, D. W.; Kumar, S.; Srinivasan, M. A.
2013-01-01
For most people who are blind, exploring an unknown environment can be unpleasant, uncomfortable, and unsafe. Over the past years, the use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on the rise. This research is based on the hypothesis that the supply of appropriate perceptual and conceptual information through compensatory sensorial channels may assist people who are blind with anticipatory exploration. In this research we developed and tested the BlindAid system, which allows the user to explore a virtual environment. The two main goals of the research were: (a) evaluation of different modalities (haptic and audio) and navigation tools, and (b) evaluation of spatial cognitive mapping employed by people who are blind. Our research included four participants who are totally blind. The preliminary findings confirm that the system enabled participants to develop comprehensive cognitive maps by exploring the virtual environment. PMID:24353744
The Effect of Perspective on Presence and Space Perception
Ling, Yun; Nefs, Harold T.; Brinkman, Willem-Paul; Qu, Chao; Heynderickx, Ingrid
2013-01-01
In this paper we report two experiments in which the effect of perspective projection on presence and space perception was investigated. In Experiment 1, participants were asked to complete a presence questionnaire while looking at a virtual classroom. We manipulated the vantage point, the viewing mode (binocular versus monocular viewing), the display device/screen size (projector versus TV), and the center of projection. At the end of each session of Experiment 1, participants were asked to set their preferred center of projection such that the image seemed most natural to them. In Experiment 2, participants were asked to draw a floor plan of the virtual classroom. The results show that field of view, viewing mode, center of projection, and display all significantly affect presence and the perceived layout of the virtual environment. We found a significant linear relationship between presence and the perceived layout of the virtual classroom, and between the preferred center of projection and the perceived layout. The results indicate that the way in which virtual worlds are presented is critical for the level of experienced presence. The results also suggest that people ignore veridicality and experience a higher level of presence when viewing elongated virtual environments than when viewing the originally intended shape. PMID:24223156
Baig, Hasan; Madsen, Jan
2017-01-15
Simulation and behavioral analysis of genetic circuits is a standard approach to functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. Runtime interaction gives the user the feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation, and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
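Stochastic simulation of a genetic circuit, one of the modes mentioned above, is conventionally done with the Gillespie algorithm. The sketch below is a generic birth-death example of that algorithm, not D-VASim's implementation; the rate constants are arbitrary.

```python
import random

def gillespie_expression(k_prod=5.0, k_deg=0.1, t_end=50.0, seed=0):
    """Gillespie SSA for a one-gene birth-death model: protein produced
    at constant rate k_prod, degraded at rate k_deg * n."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_end:
        a_prod, a_deg = k_prod, k_deg * n        # reaction propensities
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)            # time to next reaction
        if rng.random() * a_total < a_prod:      # pick which reaction fired
            n += 1
        else:
            n -= 1
    return n

# Copy number fluctuates around the steady-state mean k_prod / k_deg = 50.
final_n = gillespie_expression()
```

Deterministic simulation of the same model would instead integrate the ODE dn/dt = k_prod - k_deg * n, which converges to the same mean without fluctuations.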
Usage of Thin-Client/Server Architecture in Computer Aided Education
ERIC Educational Resources Information Center
Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit
2014-01-01
With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…
PC-Based Virtual Reality for CAD Model Viewing
ERIC Educational Resources Information Center
Seth, Abhishek; Smith, Shana S.-F.
2004-01-01
Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…
Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment
2015-06-09
Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David
Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327
The Modeling of Virtual Environment Distance Education
NASA Astrophysics Data System (ADS)
Xueqin, Chang
This research presents a virtual environment that integrates, in a virtual mockup, the services available on a university campus, enabling communication between students and teachers in different physical locations. Advantages of this system include the remote access to a variety of services and educational tools, and the representation of real structures and landscapes in an interactive 3D model that aids the localization of services and preserves the administrative organization of the university. To that end, the system implements access control for users and an interface that allows the use of existing educational equipment and resources not designed for the distance education mode.
Playing in or out of character: user role differences in the experience of interactive storytelling.
Roth, Christian; Vermeulen, Ivar; Vorderer, Peter; Klimmt, Christoph; Pizzi, David; Lugrin, Jean-Luc; Cavazza, Marc
2012-11-01
Interactive storytelling (IS) is a promising new entertainment technology synthesizing preauthored narrative with dynamic user interaction. Existing IS prototypes employ different modes to involve users in a story, ranging from individual avatar control to comprehensive control over the virtual environment. The current experiment tested whether different player modes (exerting local vs. global influence) yield different user experiences (e.g., senses of immersion vs. control). A within-subject design involved 34 participants playing the cinematic IS drama "Emo Emma" both in the local (actor) and the global (ghost) mode. The latter mode allowed free movement in the virtual environment and hidden influence on characters, objects, and story development. As expected, control-related experiential qualities (effectance, autonomy, flow, and pride) were more intense for players in the global (ghost) mode. Immersion-related experiences did not differ across modes. Additionally, men preferred the sense of command facilitated by the ghost mode, whereas women preferred the sense of involvement facilitated by the actor mode.
Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2002-01-01
This paper demonstrates that the capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, create a unique and valuable system that gives users an efficient virtual capability to tour facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instruction on operating both scripted and unscripted tours, as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them, and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.
Confessions of a Second Life: Conforming in the Virtual World?
NASA Astrophysics Data System (ADS)
Chicas, K.; Bailenson, J.; Stevenson Won, A.; Bailey, J.
2012-12-01
Virtual worlds such as Second Life or World of Warcraft are increasingly popular, with people all over the world joining these online communities. In these virtual environments people break the barriers of reality every day when they fly, walk through walls, and teleport between places. It is easy for people to violate the norms and behaviors of the real world in the virtual environment without real-world consequences. However, previous research has shown that users' behavior may conform to their digital self-representation (avatar). This is also known as the Proteus effect (Yee, 2007). Are people behaving in virtual worlds in ways that most people would not in the physical world? It is important to understand the behaviors that occur in the virtual world if they have an impact on how people act in the real world.
Fonseca, Luciana Mara Monti; Dias, Danielle Monteiro Vilela; Góes, Fernanda Dos Santos Nogueira; Seixas, Carlos Alberto; Scochi, Carmen Gracinda Silvan; Martins, José Carlos Amado; Rodrigues, Manuel Alves
2014-09-01
The present study aimed to describe the development process of a serious game that enables users to evaluate the respiratory process in a preterm infant based on an emotional design model. The e-Baby serious game was built to feature the simulated environment of an incubator, in which the user performs a clinical evaluation of the respiratory process in a virtual preterm infant. The user learns about the preterm baby's history, chooses the tools for the clinical evaluation, evaluates the baby, and determines whether his/her evaluation is appropriate. The e-Baby game presents phases that contain respiratory process impairments of higher or lower complexity in the virtual preterm baby. Included links give the user the option of recording the entire evaluation procedure and sharing his/her performance on a social network. e-Baby is integrated into a Clinical Evaluation of the Preterm Baby course in the Moodle virtual environment. This game, which evaluates the respiratory process in preterm infants, could support a more flexible, attractive, and interactive teaching and learning process that includes simulations with features very similar to neonatal unit realities, thus allowing more appropriate training for clinical oxygenation evaluations in at-risk preterm infants. e-Baby supports advanced interaction between the user, the technology, and the educational content because it requires the user's active participation and integrates emotional design.
Brundage, Shelley B; Hancock, Adrienne B
2015-05-01
Virtual reality environments (VREs) are computer-generated, 3-dimensional worlds that allow users to experience situations similar to those encountered in the real world. The purpose of this study was to investigate VREs for potential use in assessing and treating persons who stutter (PWS) by determining the extent to which PWS's affective, behavioral, and cognitive measures in a VRE correlate with those same measures in a similar live environment. Ten PWS delivered speeches-first to a live audience and, on another day, to 2 virtual audiences (neutral and challenging audiences). Participants completed standard tests of communication apprehension and confidence prior to each condition, and frequency of stuttering was measured during each speech. Correlational analyses revealed significant, positive correlations between virtual and live conditions for affective and cognitive measures as well as for frequency of stuttering. These findings suggest that virtual public speaking environments engender affective, behavioral, and cognitive reactions in PWS that correspond to those experienced in the real world. Therefore, the authentic, safe, and controlled environments provided by VREs may be useful for stuttering assessment and treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Anez, Francisco
This paper presents two development projects (STARMATE and VIRMAN) focused on supporting maintenance training. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. The objective is to create a computer tool for elaborating maintenance training courses and delivering training based on 3D virtual reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention, trying to follow the procedure. Users can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a forward step in the area of virtual reality. STARMATE is a European Commission project in the frame of 'Information Societies Technologies'. A consortium of five companies and one research institute shares its expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone, and an optical tracking system. All these devices are integrated into a helmet connected to two regular computers.
The user's hands remain free for performing the maintenance intervention, and he can navigate the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, while 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts, independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1993-01-01
In its search for higher level computer interfaces and more realistic electronic simulations for measurement and spatial analysis in human factors design, NASA at MSFC is evaluating the functionality of virtual reality (VR) technology. Virtual reality simulation generates a three-dimensional environment in which the participant appears to be enveloped. It is a type of interactive simulation in which humans are not only involved, but included. Virtual reality technology is still in the experimental phase, but it appears to be the next logical step after computer-aided three-dimensional animation in transferring the viewer from a passive to an active role in experiencing and evaluating an environment. There is great potential for using this new technology when designing environments for more successful interaction, both with the environment and with another participant in a remote location. At the University of North Carolina, a VR simulation of the planned Sitterson Hall revealed a flaw in the building's design that had not been observed during examination of the more traditional building plan simulation methods on paper and on a computer-aided design (CAD) workstation. The virtual environment enables multiple participants in remote locations to come together and interact with one another and with the environment. Each participant is capable of seeing herself and the other participants and of interacting with them within the simulated environment.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Measuring Reduction Methods for VR Sickness in Virtual Environments
ERIC Educational Resources Information Center
Magaki, Takurou; Vallance, Michael
2017-01-01
Recently, virtual reality (VR) technologies have developed remarkably. However, some users have negative symptoms during VR experiences or post-experiences. Consequently, alleviating VR sickness is a major challenge, but an effective reduction method has not yet been discovered. The purpose of this article is to compare and evaluate VR sickness in…
Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters
ERIC Educational Resources Information Center
Younge, Andrew J.
2016-01-01
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…
A CALL for Evolving Teacher Education through 3D Microteaching
ERIC Educational Resources Information Center
Pappa, Giouli; Papadima-Sophocleous, Salomi
2016-01-01
This paper describes micro-teaching delivery in virtual worlds. Emphasis is placed on examining the effectiveness of Singularity Viewer, an Internet-based Multi-User Virtual Environment (MUVE) as the tool used for assessment of the student teacher performance. The overall goal of this endeavour lies in exploiting the opportunities derived from…
ERIC Educational Resources Information Center
Nahl, Diane
2010-01-01
New users of virtual environments face a steep learning curve, requiring persistence and determination to overcome challenges experienced while acclimatizing to the demands of avatar-mediated behavior. Concurrent structured self-reports can be used to monitor the personal affective and cognitive struggles involved in virtual world adaptation to…
ERIC Educational Resources Information Center
Brown, Abbie; Sugar, William
2009-01-01
Second Life is a three-dimensional, multi-user virtual environment that has attracted particular attention for its instructional potential in professional development and higher education settings. This article describes Second Life in general and explores the benefits and challenges of using it for teaching and learning.
13 Tips for Virtual World Teaching
ERIC Educational Resources Information Center
Villano, Matt
2008-01-01
Multi-user virtual environments (MUVEs) are gaining momentum as the latest and greatest learning tool in the world of education technology. How does one get started with them? How do they work? This article shares 13 secrets from immersive education experts and educators on how to have success in implementing these new tools and technologies on…
Using shadow page cache to improve isolated drivers performance.
Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe
2015-01-01
With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining write-operation capture with a driver's private access control table. However, this method needs to keep the write permission of the shadow page table read-only, so as to capture an isolated driver's write operations through page faults, which adversely affects the performance of the driver. By delaying the resetting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.
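The delayed re-protection idea can be sketched as a toy write-permission cache. This is a simplified model of the technique, not Chariot's implementation: pages in the cache keep write permission (writes pass unchecked), while evicted pages revert to read-only so the next write faults and can be checked.

```python
from collections import OrderedDict

class ShadowPageCache:
    """Toy model: frequently written shadow pages keep write permission
    (cached); all other pages stay read-only, so driver writes to them
    trigger a page fault and can be verified before re-caching the page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()      # page -> True, in LRU order
        self.faults = 0                 # count of trapped (checked) writes

    def write(self, page):
        if page in self.cache:          # hot page: write goes through unchecked
            self.cache.move_to_end(page)
            return
        self.faults += 1                # cold page: fault, check the write...
        self.cache[page] = True         # ...then delay re-protecting it
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evicted page becomes read-only again

cache = ShadowPageCache(capacity=2)
for page in [1, 2, 1, 1, 3, 2]:
    cache.write(page)
print(cache.faults)  # → 4 trapped writes for this access pattern
```

A larger cache traps fewer writes (better driver performance) but leaves more pages writable without checks, which is exactly the performance/reliability trade-off the paper studies.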
Human-scale interaction for virtual model displays: a clear case for real tools
NASA Astrophysics Data System (ADS)
Williams, George C.; McDowall, Ian E.; Bolas, Mark T.
1998-04-01
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and the interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.
Android Based Mobile Environment for Moodle Users
ERIC Educational Resources Information Center
de Clunie, Gisela T.; Clunie, Clifton; Castillo, Aris; Rangel, Norman
2013-01-01
This paper is about the development of a platform that eases, throughout Android based mobile devices, mobility of users of virtual courses at Technological University of Panama. The platform deploys computational techniques such as "web services," design patterns, ontologies and mobile technologies to allow mobile devices communicate…
Virtual Partnerships in Research and Education.
ERIC Educational Resources Information Center
Payne, Deborah A.; Keating, Kelly A.; Myers, James D.
The William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at the Pacific Northwest National Laboratory (Washington) is a collaborative user facility with many unique scientific capabilities. The EMSL expects to support many of its remote users and collaborators by electronic means and is creating a collaborative environment for this…
NASA Astrophysics Data System (ADS)
Demir, I.
2014-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments, and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g., flooding and flood damage), and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system, and demonstrates the capabilities of the system for various visualization and interaction modes.
Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg
2018-01-01
Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.
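The audio/haptic/visual cue manipulation above is a 2x2x2 factorial design. With invented completion times (not the study's data), the main effect of each cue can be computed as the mean difference between cue-present and cue-absent conditions:

```python
# times[(audio, tactile, visual)]: hypothetical task completion times (s),
# with 0 = substitute cue absent, 1 = present; lower is better.
times = {(0, 0, 0): 210, (0, 0, 1): 200, (0, 1, 0): 185, (0, 1, 1): 178,
         (1, 0, 0): 190, (1, 0, 1): 182, (1, 1, 0): 165, (1, 1, 1): 158}

def main_effect(factor):
    """Mean completion time with the cue present minus with it absent."""
    hi = [t for cond, t in times.items() if cond[factor] == 1]
    lo = [t for cond, t in times.items() if cond[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, idx in [("audio", 0), ("tactile", 1), ("visual", 2)]:
    print(name, main_effect(idx))  # negative = the cue speeds up the task
```

For these invented numbers the audio and tactile effects are largest, mirroring the significant main effects of audio and tactile cues that the study reports.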
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study
2017-01-01
Background Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. The use of virtual visits is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. Objective Descriptive objectives were to assess users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. Methods The study took place in British Columbia, Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched (3:1) to the cohort. The first virtual visit was used as the intervention and the main outcome measures were total primary care visits and costs. Results During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001). 
The survey of 399 virtual visit patients indicated that virtual visits were liked by patients, with 372 (93.2%) of respondents saying their virtual visit was of high quality and 364 (91.2%) reporting their virtual visit was “very” or “somewhat” helpful to resolve their health issue. Segmented regression analysis and the corresponding regression parameter estimates suggested virtual visits appear to have the potential to decrease primary care costs by approximately Can $4 per quarter (Can –$3.79, P=.12), but that benefit is most associated with seeing a known provider (Can –$8.68, P<.001). Conclusions Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. PMID:28550006
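The segmented regression described above can be sketched as an interrupted time-series fit. The quarterly costs below are invented for illustration (not the study's data), with a level drop of about $4 built in at the interruption:

```python
import numpy as np

# Hypothetical quarterly primary care costs per patient, before and after a
# first virtual visit (interruption at quarter 8).
quarters = np.arange(16)
interrupt = 8
costs = np.where(quarters < interrupt,
                 100 + 0.5 * quarters,        # pre-period trend
                 100 + 0.5 * quarters - 4.0)  # post-period: built-in level drop
costs = costs + np.random.default_rng(0).normal(0, 0.3, 16)  # observation noise

# Design matrix: intercept, time, post-period indicator, time since interruption
post = (quarters >= interrupt).astype(float)
X = np.column_stack([np.ones(16), quarters, post, post * (quarters - interrupt)])
beta, *_ = np.linalg.lstsq(X, costs, rcond=None)
print(f"estimated level change: {beta[2]:.2f}")  # ≈ -4 for this invented series
```

The coefficient on the post-period indicator (`beta[2]`) plays the role of the study's estimated per-quarter cost change after the first virtual visit.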
A Hypermedia Representation of a Taxonomy of Usability Characteristics in Virtual Environments
2003-03-01
user, organization, and social workflow; needs analysis; and user modeling. A user task analysis generates critical information used throughout all...exist specific to VE user interaction [Gabbard and others, 1999]. Typically more than one person performs guidelines-based evaluations, since it’s...unlikely that any one person could identify all if not most of an interaction design’s usability problems. Nielsen [1994] recommends three to five
Issues of Learning Games: From Virtual to Real
ERIC Educational Resources Information Center
Carron, Thibault; Pernelle, Philippe; Talbot, Stéphane
2013-01-01
Our research work deals with the development of new learning environments, and we are particularly interested in studying the different aspects linked to users' collaboration in these environments. We believe that Game-based Learning can significantly enhance learning. That is why we have developed learning environments grounded on graphical…
A virtual reality environment for telescope operation
NASA Astrophysics Data System (ADS)
Martínez, Luis A.; Villarreal, José L.; Ángeles, Fernando; Bernal, Abel
2010-07-01
Astronomical observatories and telescopes are becoming increasingly large and complex systems, requiring any potential user to acquire a great amount of information before accessing them. At present, the most common way to manage that information is through the implementation of larger graphical user interfaces and computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in the improvement of the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, which is suggested here as a software platform for the operation of the 1-m telescope.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
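Tiling a 360-degree panorama into equal virtual-camera views reduces to simple yaw arithmetic. The sketch below is an illustration of that geometry, not the authors' game-engine code:

```python
def camera_yaws(n_views):
    """Center yaw (degrees) and horizontal FOV for n equally spaced virtual
    cameras whose views together tile a full 360-degree panorama."""
    fov = 360 / n_views
    return [(i * fov + fov / 2, fov) for i in range(n_views)]

print(camera_yaws(4))  # four 90-degree views: centers at 45, 135, 225, 315
print(camera_yaws(2))  # two 180-degree views: centers at 90, 270
```

The three interfaces in the paper correspond to `camera_yaws(4)`, `camera_yaws(2)`, and `camera_yaws(1)`; only the per-camera field of view (and hence the distortion when flattened to 2D) changes.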
Harnessing the power of conversations with virtual humans to change health behaviors.
Albright, Glenn; Adam, Cyrille; Serri, Deborah; Bleeker, Seth; Goldman, Ron
2016-01-01
Skillful, collaborative conversations are powerful tools to improve physical and mental health. Whether you are a parent talking with your child about the dangers of substance abuse, an educator concerned about a student's signs of psychological distress, a veteran worried about a buddy who is contemplating suicide, or a healthcare professional wanting to better engage patients to increase treatment compliance, having the skill, confidence and motivation to engage in conversations can truly transform the health and well-being of those you interact with. Kognito develops role-play simulations that prepare individuals to effectively lead real-life conversations that measurably improve social, emotional, and physical health. The behavior change model that drives the simulations draws upon components of game mechanics and virtual human simulation technology, and integrates evidence-based instructional design components as well as principles of social-cognitive theory and neuroscience such as motivational interviewing, emotional regulation, empathy and mindfulness. In the simulations, users enter a risk-free practice environment and engage in a conversation with intelligent, fully animated, and emotionally responsive virtual characters that model human behavior. It is in practicing these conversations, and receiving feedback from a virtual coach, that users learn to better lead conversations in real life. Numerous longitudinal studies have shown that users who complete Kognito simulations demonstrate statistically significant and sustained increases in attitudinal variables that predict behavior change, including preparedness, likelihood, and self-efficacy to better manage conversations. 
Depending on the target population, each online or mobile simulation produced desired behavior changes, ranging from increased referrals of students, patients, or veterans in psychological distress to mental health support services, to increased physician patient-centered communication or patient self-confidence and active involvement in decision-making processes. These simulations have demonstrated a capability to address major health and public health concerns where effective conversations are necessary to bring about changes in attitudes and behaviors.
Virtualized Multi-Mission Operations Center (vMMOC) and its Cloud Services
NASA Technical Reports Server (NTRS)
Ido, Haisam Kassim
2017-01-01
This presentation will cover the current and future technical and organizational opportunities and challenges of virtualizing a multi-mission operations center. The full deployment of Goddard Space Flight Center's (GSFC) Virtualized Multi-Mission Operations Center (vMMOC) is nearly complete. The Space Science Mission Operations (SSMO) organization's spacecraft ACE, Fermi, LRO, MMS(4), OSIRIS-REx, SDO, SOHO, Swift, and Wind are in the process of being fully migrated to the vMMOC. The benefits of the vMMOC will be the normalization and standardization of IT services, mission operations, maintenance, and development, as well as ancillary services and policies such as collaboration tools, change management systems, and IT security. The vMMOC will also provide operational efficiencies regarding hardware, IT domain expertise, training, maintenance, and support. The presentation will also cover SSMO's secure Situational Awareness Dashboard in an integrated, fleet-centric, cloud-based web services fashion. Additionally, the SSMO Telemetry as a Service (TaaS) will be covered, which allows authorized users and processes to access telemetry for the entire SSMO fleet, and for the entirety of each spacecraft's history. Both services leverage cloud services in a secure FISMA High and FedRAMP environment, and also leverage distributed object stores to house and provide the telemetry. The services are also in the process of leveraging the elasticity and horizontal scalability of cloud computing. In the design phase is Navigation as a Service (NaaS), which will provide a standardized, efficient, and normalized service for the fleet's space flight dynamics operations. Additional future services that may be considered are Ground Segment as a Service (GSaaS), Telemetry and Command as a Service (TCaaS), Flight Software Simulation as a Service, etc.
Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.
Aromaa, Susanna; Väänänen, Kaisa
2016-09-01
In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics (HFE) evaluation during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototype for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was valued as being more suitable to support the assessment of visibility, reach, and the use of tools than the AR system. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Identification of Program Signatures from Cloud Computing System Telemetry Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.
Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
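The proxy-identification idea can be sketched as a nearest-centroid classifier over telemetry vectors. The metric names and values below are invented to mimic Ceilometer-style meters; this is an illustration of the concept, not the paper's method or data:

```python
import math

# Hypothetical mean telemetry per known program:
# (cpu_util %, disk writes/s, network kB/s)
centroids = {
    "compress":  (85.0, 120.0,   5.0),
    "webserver": (20.0,  15.0, 400.0),
    "idle":      ( 2.0,   1.0,   1.0),
}

def identify(sample):
    """Return the program whose telemetry centroid is nearest (Euclidean)
    to the observed sample -- the program's 'signature' match."""
    return min(centroids, key=lambda p: math.dist(centroids[p], sample))

print(identify((80.0, 110.0, 8.0)))  # → compress
print(identify((3.0, 2.0, 0.5)))     # → idle
```

The paper's observed accuracy ceiling has a direct analogue here: if two programs' centroids are close (indistinct telemetry), nearest-centroid matching misclassifies samples that fall between them.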
Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2010-01-01
Purpose Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933
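The decoupled multi-rate scheme can be sketched as two update loops driven from one clock: a haptics "controller" at about 1 kHz and a rendering "viewer" at about 30 Hz. This is a simplified model of the idea, not the authors' framework:

```python
def run(duration_s, haptic_hz=1000, render_hz=30, tick_hz=3000):
    """Simulate two decoupled update loops sharing one clock; each loop
    fires whenever simulated time crosses its next scheduled update."""
    haptic_updates = render_updates = 0
    next_haptic = next_render = 0.0
    for tick in range(int(duration_s * tick_hz)):
        t = tick / tick_hz
        if t >= next_haptic:            # high-rate loop: force feedback
            haptic_updates += 1
            next_haptic += 1 / haptic_hz
        if t >= next_render:            # low-rate loop: drawing
            render_updates += 1
            next_render += 1 / render_hz
    return haptic_updates, render_updates

h, r = run(1.0)
print(h, r)  # ≈ 1000 haptic updates vs ≈ 30 renders per simulated second
```

Because each loop schedules itself independently, a slow frame in one loop (e.g., network delay on the viewer side) does not stall the other, which is the property that lets the framework tolerate 1,000 ms network delays without a haptics performance penalty.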
Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2011-07-01
Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
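The decoupled-simulation idea above, with each process running at its own rate, can be sketched as a small multi-rate scheduler over simulated time. The module names and rates below are illustrative, not the framework's actual API.

```python
# Minimal sketch of decoupled simulation: each module advances at its own
# rate while a scheduler dispatches whichever update is due next.

class Module:
    def __init__(self, name, rate_hz):
        self.name = name
        self.period = 1.0 / rate_hz
        self.next_t = 0.0
        self.updates = 0

    def step(self):
        self.updates += 1
        # Recompute from the count to avoid floating-point drift
        self.next_t = self.updates * self.period

def run(modules, duration):
    """Advance simulated time, always stepping whichever module is due next."""
    while True:
        mod = min(modules, key=lambda m: m.next_t)
        if mod.next_t >= duration:
            break
        mod.step()

haptics = Module("haptics", 1000)   # force feedback needs ~1 kHz updates
renderer = Module("renderer", 60)   # display refresh runs much slower
run([haptics, renderer], duration=1.0)
```

Over one simulated second the haptics module updates 1,000 times while the renderer updates 60 times, which is the essential property the abstract describes: fast and slow processes coexist without forcing a common frame rate.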
The VRFurnace: A Virtual Reality Application for Energy System Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Peter Eric
2001-01-01
The VRFurnace is a unique VR application designed to analyze a complete coal-combustion CFD model of a power plant furnace. Although other applications have been created that analyze furnace performance, none has included the added complications of particle tracking and the reactions associated with coal combustion. Currently the VRFurnace is a versatile analysis tool. Data translators have been written to allow data from most of the major commercial CFD software packages, as well as standard data formats of hand-written code, to be uploaded into the VR application. Because of this, almost any type of CFD model of any power plant component can be analyzed immediately. The ease of use of the VRFurnace is another of its qualities. The menu system created for the application not only guides first-time users through the various button combinations but also helps the experienced user keep track of which tool is being used. Because the VRFurnace was designed for use in the C6 device at Iowa State University's Virtual Reality Applications Center, it is naturally a collaborative project. The projection-based system allows many people to be involved in the analysis process. This type of environment opens the design process to include not only CFD analysts but management teams and plant operators as well, by making it easier for engineers to explain design changes. The 3D visualization allows power plant components to be studied in the context of their natural physical environments, giving engineers a chance to use their innate pattern recognition and intuitive skills to bring to light key relationships that may previously have gone unrecognized. More specifically, the tools that have been developed make better use of the third dimension that the synthetic environment provides.
Whereas the plane tools make it easier to track down interesting features of a given flow field, the box tools allow the user to focus on these features and reduce the data load on the computer.
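The box tool described above, which restricts attention to a sub-region of the data to cut the load, can be sketched roughly as follows; the point-cloud layout and coordinates are invented for the example.

```python
# Illustrative sketch of a "box tool": keep only the CFD sample points
# inside an axis-aligned region, reducing the data the renderer must handle.

def box_filter(points, lo, hi):
    """Keep only points whose coordinates fall inside the box [lo, hi]."""
    def inside(p):
        return all(l <= c <= h for c, l, h in zip(p, lo, hi))
    return [p for p in points if inside(p)]

# Toy "furnace" grid: 4 x 4 x 4 sample points, coordinates in metres
points = [(x * 0.5, y * 0.5, z * 0.5)
          for x in range(4) for y in range(4) for z in range(4)]

# Focus on a hypothetical burner region near the origin
burner_region = box_filter(points, lo=(0.0, 0.0, 0.0), hi=(0.5, 0.5, 0.5))
```

The filtered subset is an eighth the size of the toy grid, so downstream visualization tools touch far less data.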
Personalization in an Interactive Learning Environment through a Virtual Character
ERIC Educational Resources Information Center
Reategui, E.; Boff, E.; Campbell, J. A.
2008-01-01
Traditional hypermedia applications present the same content and provide identical navigational support to all users. Adaptive Hypermedia Systems (AHS) make it possible to construct personalized presentations for each user, according to the preferences and needs identified. We present in this paper an alternative approach to educational AHS where a…
Role of virtual reality for cerebral palsy management.
Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy
2014-08-01
Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and, less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game in which a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes, which provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to its potential for neuroplasticity and learning in individuals with neurologic disorders. The objective of this article is to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.
User-Centered Computer Aided Language Learning
ERIC Educational Resources Information Center
Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.
2006-01-01
In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…
Object Creation and Human Factors Evaluation for Virtual Environments
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1998-01-01
The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, AL. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using a data glove, hand controller, or mouse. These simulated objects are solid or surfaced three-dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.
Riva, Giuseppe; Carelli, Laura; Gaggioli, Andrea; Gorini, Alessandra; Vigna, Cinzia; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca
2009-01-01
At MMVR 2007 we presented NeuroVR (http://www.neurovr.org), a free virtual reality platform based on open-source software. The software allows non-expert users to adapt the content of 14 pre-designed virtual environments to the specific needs of the clinical or experimental setting. Following the feedback of the 700 users who downloaded the first version, we developed a new version - NeuroVR 1.5 - that improves the therapist's ability to enhance the patient's feeling of familiarity and intimacy with the virtual scene by using external sounds, photos, or videos. Specifically, the new version now includes full sound support and the ability to trigger external sounds and videos using the keyboard. The outcomes of different trials made using NeuroVR will be presented and discussed.
ISS Radiation Shielding and Acoustic Simulation Using an Immersive Environment
NASA Technical Reports Server (NTRS)
Verhage, Joshua E.; Sandridge, Chris A.; Qualls, Garry D.; Rizzi, Stephen A.
2002-01-01
The International Space Station Environment Simulator (ISSES) is a virtual reality application that uses high-performance computing, graphics, and audio rendering to simulate the radiation and acoustic environments of the International Space Station (ISS). This CAVE application allows the user to maneuver to different locations inside or outside of the ISS and interactively compute and display the radiation dose at a point. The directional dose data is displayed as a color-mapped sphere that indicates the relative levels of radiation from all directions about the center of the sphere. The noise environment is rendered in real time over headphones or speakers and includes non-spatial background noise, such as air-handling equipment, and spatial sounds associated with specific equipment racks, such as compressors or fans. Changes can be made to equipment rack locations that produce changes in both the radiation shielding and system noise. The ISSES application allows for interactive investigation and collaborative trade studies between radiation shielding and noise for crew safety and comfort.
A white paper: NASA virtual environment research, applications, and technology
NASA Technical Reports Server (NTRS)
Null, Cynthia H. (Editor); Jenkins, James P. (Editor)
1993-01-01
Research support for Virtual Environment technology development has been a part of NASA's human factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division, Ames Research Center, which resulted in the origination of this technology. Since 1985, other Centers have begun using and developing this technology. At each research and space flight center, NASA missions have been major drivers of the technology. This White Paper was the joint effort of all the Centers which have been involved in the development of technology and its applications to their unique missions. Appendix A is the list of those who have worked to prepare the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. This White Paper describes the technology and its applications in NASA Centers (Chapters 1, 2 and 3), the potential roles it can take in NASA (Chapters 4 and 5), and a roadmap of the next 5 years (FY 1994-1998). The audience for this White Paper consists of managers, engineers, scientists and the general public with an interest in Virtual Environment technology. Those who read the paper will determine whether this roadmap, or others, are to be followed.
Kinematic evaluation of virtual walking trajectories.
Cirio, Gabriel; Olivier, Anne-Hélène; Marchal, Maud; Pettré, Julien
2013-04-01
Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation, and training. Previous studies evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few focused on the locomotion trajectory itself, and then only for geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework through a user study which considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
A collaborative virtual reality environment for neurosurgical planning and training.
Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel
2007-11-01
We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.
Practicing Learner-Centered Teaching: Pedagogical Design and Assessment of a Second Life Project
ERIC Educational Resources Information Center
Schiller, Shu Z.
2009-01-01
Guided by the principles of learner-centered teaching methodology, a Second Life project is designed to engage students in active learning of virtual commerce through hands-on experiences and teamwork in a virtual environment. More importantly, an assessment framework is proposed to evaluate the learning objectives and learning process of the…
Thomas, J Graham; Spitalnick, Josh S; Hadley, Wendy; Bond, Dale S; Wing, Rena R
2015-01-01
Virtual reality (VR) technology can provide a safe environment for observing, learning, and practicing use of behavioral weight management skills, which could be particularly useful in enhancing minimal contact online weight management programs. The Experience Success (ES) project developed a system for creating and deploying VR scenarios for online weight management skills training. Virtual environments populated with virtual actors allow users to experiment with implementing behavioral skills via a PC-based point and click interface. A culturally sensitive virtual coach guides the experience, including planning for real-world skill use. Thirty-seven overweight/obese women provided feedback on a test scenario focused on social eating situations. They reported that the scenario gave them greater skills, confidence, and commitment for controlling eating in social situations. © 2014 Diabetes Technology Society.
Spitalnick, Josh S.; Hadley, Wendy; Bond, Dale S.; Wing, Rena R.
2014-01-01
Virtual reality (VR) technology can provide a safe environment for observing, learning, and practicing use of behavioral weight management skills, which could be particularly useful in enhancing minimal contact online weight management programs. The Experience Success (ES) project developed a system for creating and deploying VR scenarios for online weight management skills training. Virtual environments populated with virtual actors allow users to experiment with implementing behavioral skills via a PC-based point and click interface. A culturally sensitive virtual coach guides the experience, including planning for real-world skill use. Thirty-seven overweight/obese women provided feedback on a test scenario focused on social eating situations. They reported that the scenario gave them greater skills, confidence, and commitment for controlling eating in social situations. PMID:25367014
Recent developments in virtual experience design and production
NASA Astrophysics Data System (ADS)
Fisher, Scott S.
1995-03-01
Today, the media of VR and Telepresence are in their infancy and the emphasis is still on technology and engineering. But it is not the hardware people might use that will determine whether VR becomes a powerful medium--instead, it will be the experiences that they are able to have that will drive its acceptance and impact. A critical challenge in the elaboration of these telepresence capabilities will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location or experience. This paper describes the recent development of several Virtual Experiences, including: 'Menagerie', an immersive Virtual Environment inhabited by virtual characters designed to respond to and interact with its users; and 'The Virtual Brewery', an immersive public VR installation that provides multiple levels of interaction in an artistic interpretation of the brewing process.
Derived virtual devices: a secure distributed file system mechanism
NASA Technical Reports Server (NTRS)
VanMeter, Rodney; Hotz, Steve; Finn, Gregory
1996-01-01
This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.
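A rough sketch of the device-side checks the abstract mentions (resource boundary checking and restricted operations) might look as follows; the class name and policy fields are assumptions for illustration, not the Netstation interface.

```python
# Hedged sketch of derived-virtual-device access control enforced at the
# device: a DVD grants a process authority over a bounded block range,
# optionally read-only.

class DerivedVirtualDevice:
    def __init__(self, first_block, n_blocks, read_only=True):
        self.first = first_block
        self.limit = first_block + n_blocks
        self.read_only = read_only

    def check(self, op, block):
        """Return True if the request is within this DVD's authority."""
        if op == "write" and self.read_only:
            return False                          # restricted operation
        return self.first <= block < self.limit   # resource boundary check

# A read-only view of 100 blocks starting at block 1000
dvd = DerivedVirtualDevice(first_block=1000, n_blocks=100, read_only=True)
```

Because the check runs at the device, user processes can issue I/O directly without routing every request through the kernel, which is the efficiency argument the paper makes.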
Zanbaka, Catherine A; Lok, Benjamin C; Babu, Sabarish V; Ulinski, Amy C; Hodges, Larry F
2005-01-01
We describe a between-subjects experiment that compared four different methods of travel and their effect on cognition and paths taken in an immersive virtual environment (IVE). Participants answered a set of questions based on Crook's condensation of Bloom's taxonomy that assessed their cognition of the IVE with respect to knowledge, understanding and application, and higher mental processes. Participants also drew a sketch map of the IVE and the objects within it. The users' sense of presence was measured using the Steed-Usoh-Slater Presence Questionnaire. The participants' position and head orientation were automatically logged during their exposure to the virtual environment. These logs were later used to create visualizations of the paths taken. Path analysis, such as exploring the overlaid path visualizations and dwell data information, revealed further differences among the travel techniques. Our results suggest that, for applications where problem solving and evaluation of information is important or where opportunity to train is minimal, then having a large tracked space so that the participant can walk around the virtual environment provides benefits over common virtual travel techniques.
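The path-log analysis described above (dwell data computed from logged positions) might look roughly like this; the log format, target point, and radius are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: from a logged (time, position) trace, compute the dwell time
# spent within a given radius of an object of interest.
import math

def dwell_time(log, target, radius):
    """Sum the time spent within `radius` of `target` along the trace."""
    total = 0.0
    for (t0, p0), (t1, _) in zip(log, log[1:]):
        if math.dist(p0, target) <= radius:
            total += t1 - t0
    return total

# Toy trace: one sample per second, 2D positions in metres
trace = [(0.0, (0, 0)), (1.0, (1, 0)), (2.0, (5, 0)), (3.0, (5, 1))]
```

Aggregating such dwell values per object, per travel technique, is one simple way to quantify the path differences the visualizations revealed.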
LHCb experience with running jobs in virtual machines
NASA Astrophysics Data System (ADS)
McNab, A.; Stagni, F.; Luzzi, C.
2015-12-01
The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for use. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
A Prototype Publishing Registry for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Williamson, R.; Plante, R.
2004-07-01
In the Virtual Observatory (VO), a registry helps users locate resources, such as data and services, in a distributed environment. A general framework for VO registries is now under development within the International Virtual Observatory Alliance (IVOA) Registry Working Group. We present a prototype of one component of this framework: the publishing registry. The publishing registry allows data providers to expose metadata descriptions of their resources to the VO environment. Searchable registries can harvest the metadata from many publishing registries and make them searchable by users. We have developed a prototype publishing registry that data providers can install at their sites to publish their resources. The descriptions are exposed using the Open Archive Initiative (OAI) Protocol for Metadata Harvesting. Automating the input of metadata into registries is critical when a provider wishes to describe many resources. We illustrate various strategies for such automation, both currently in use and planned for the future. We also describe how future versions of the registry can adapt automatically to evolving metadata schemas for describing resources.
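Harvesting metadata from a publishing registry via OAI-PMH could be sketched as follows; the XML below is a hand-made stand-in for a real ListRecords response, not the IVOA registry schema, and the identifier values are invented.

```python
# Minimal sketch of a searchable registry harvesting resource identifiers
# from a publishing registry's OAI-PMH ListRecords response.
import xml.etree.ElementTree as ET

# Simplified stand-in for an OAI-PMH response (real responses carry
# namespaces, datestamps, and full resource metadata).
RESPONSE = """<OAI-PMH>
  <ListRecords>
    <record><header><identifier>ivo://example/cat1</identifier></header></record>
    <record><header><identifier>ivo://example/img2</identifier></header></record>
  </ListRecords>
</OAI-PMH>"""

def harvest_identifiers(xml_text):
    """Extract the resource identifiers from a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter("identifier")]
```

A searchable registry would run such a harvest periodically against each publishing registry, then index the collected descriptions for user queries.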
The Role of Environment Design in an Educational Multi-User Virtual Environment
ERIC Educational Resources Information Center
Papachristos, Nikiforos M.; Vrellis, Ioannis; Natsis, Antonis; Mikropoulos, Tassos A.
2014-01-01
This paper presents empirical results from an exploratory study conducted in an authentic educational situation with preservice education students enrolled in an undergraduate course, which was partially taught in Second Life. The study investigated the effect of environment design on presence, learning outcomes and the overall experience of the…
Marsh, T; Wright, P; Smith, S
2001-04-01
New and emerging media technologies have the potential to induce a variety of experiences in users. In this paper, it is argued that the inducement of experience presupposes that users are absorbed in the illusion created by these media. Looking to another successful visual medium, film, this paper borrows from the techniques used in "shaping experience" to hold spectators' attention in the illusion of film, and identifies what breaks the illusion/experience for spectators. This paper focuses on one medium, virtual reality (VR), and advocates a transparent or "invisible style" of interaction. We argue that transparency keeps users in the "flow" of their activities and consequently enhances experience in users. Breakdown in activities breaks the experience and subsequently provides opportunities to identify and analyze potential causes of usability problems. Adopting activity theory, we devise a model of interaction with VR--through consciousness and activity--and introduce the concept of breakdown in illusion. From this, a model of effective interaction with VR is devised and the occurrence of breakdown in interaction and illusion is identified along a continuum of engagement. Evaluation guidelines for the design of experience are proposed and applied to usability problems detected in an empirical study of a head-mounted display (HMD) VR system. This study shows that the guidelines are effective in the evaluation of VR. Finally, we look at the potential experiences that may be induced in users and propose a way to evaluate user experience in virtual environments (VEs) and other new and emerging media.
Ahmed, Laura; Seal, Leonard H; Ainley, Carol; De la Salle, Barbara; Brereton, Michelle; Hyde, Keith; Burthem, John; Gilmore, William Samuel
2016-08-11
Morphological examination of blood films remains the reference standard for malaria diagnosis. Supporting the skills required to make an accurate morphological diagnosis is therefore essential. However, providing support across different countries and environments is a substantial challenge. This paper reports a scheme supplying digital slides of malaria-infected blood within an Internet-based virtual microscope environment to users with different access to training and computing facilities. The feasibility of the approach was established, allowing users to test, record, and compare their own performance with that of other users. From Giemsa-stained thick and thin blood films, 56 large high-resolution digital slides were prepared using high-quality image capture and a 63x oil-immersion objective lens. The individual images were combined using the photomerge function of Adobe Photoshop and then adjusted to ensure resolution and reproduction of essential diagnostic features. Web delivery employed the Digital Slidebox platform, providing digital microscope viewing facilities and image annotation with data gathering from participants. Engagement was high, with images viewed by 38 participants in five countries in a range of environments and a mean completion rate of 42/56 cases. The rate of parasite detection was 78% and the accuracy of species identification was 53%, comparable with the results of similar studies using glass slides. Data collection allowed users to compare their performance with other users over time or for each individual case. Overall, these results demonstrate that users worldwide can effectively engage with the system in a range of environments, with the potential to enhance personal performance through education, external quality assessment, and personal professional development, especially in regions where educational resources are difficult to access.
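The per-case performance comparison the system supports can be sketched as a simple accuracy computation over a participant's species calls; the case data and species labels below are invented for the example.

```python
# Sketch: score a participant's species identifications against the
# reference diagnoses, as a virtual-microscopy scheme might report back.

def accuracy(responses, truth):
    """Fraction of cases where the user's call matches the reference."""
    correct = sum(responses[case] == truth[case] for case in responses)
    return correct / len(responses)

# Hypothetical reference diagnoses and one participant's answers
truth = {"case1": "P. falciparum", "case2": "P. vivax", "case3": "negative"}
user = {"case1": "P. falciparum", "case2": "P. ovale", "case3": "negative"}
```

Aggregating such scores across participants yields cohort statistics like the 53% species-identification accuracy reported above.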
NASA Astrophysics Data System (ADS)
Delello, Julie Anne
2009-12-01
In 1999, the Chinese Academy of Sciences realized that there was a need for a better public understanding of science. For the public to have better accessibility and comprehension of China's significance to the world, the Computer Network Information Center (CNIC), under the direction of the Chinese Academy of Sciences, combined resources from thousands of experts across the world to develop online science exhibits housed within the Virtual Science Museum of China. Through an analysis of historical documents, this descriptive dissertation presents a research project that explores a dimension of the development of the Giant Panda Exhibit. This study takes the reader on a journey, first to China and then to a classroom within the United States, in order to answer the following questions: (1) What is the process of the development of a virtual science exhibit; and, (2) What role do public audiences play in the design and implementation of virtual science museums? The creation of a virtual science museum exhibition is a process that is not completed with just the building and design, but must incorporate feedback from public audiences who utilize the exhibit. To meet the needs of the museum visitors, the designers at CNIC took a user-centered approach and solicited feedback from six survey groups. To design a museum that would facilitate a cultural exchange of scientific information, the CNIC looked at the following categories: visitor insights, the usability of the technology, the educational effectiveness of the museum exhibit, and the cultural nuances that existed between students in China and in the United States. The findings of this study illustrate that the objectives of museum designers may not necessarily reflect the needs of the visitors and confirm previous research studies which indicate that museum exhibits need a more constructivist approach that fully engages the visitor in an interactive, media-rich environment. 
Even though the world has moved forward with digital technology, classroom instruction in both China and the United States continues to reflect traditional teaching methods. Students were shown to lack experience with the Internet in classrooms and to have difficulty with scientific comprehension when using the virtual science museum---showing a separation between classroom technology and learning. Students showed greater interest in learning science with technology through online gaming and rich multimedia, suggesting that virtual science museums can be educationally valuable and can support an alternative to traditional teaching methods if designed with the end user in mind.
Telearch - Integrated visual simulation environment for collaborative virtual archaeology.
NASA Astrophysics Data System (ADS)
Kurillo, Gregorij; Forte, Maurizio
Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies that demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.
A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
2015-01-01
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
Crossing the Virtual World Barrier with OpenAvatar
NASA Technical Reports Server (NTRS)
Joy, Bruce; Kavle, Lori; Tan, Ian
2012-01-01
There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; as a result, developers must "reinvent the wheel" whenever new games, virtual worlds, and simulations want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework that allows artists and programmers to create reusable assets which end users can use to generate vast numbers of complete models that are unique and functional. OpenAvatar serves as a framework which facilitates the modularization of 3D models, allowing parts to be interchanged within a set of logical constraints.
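A minimal sketch of the modularization idea above: parts are interchangeable only within a set of logical constraints. The slot names, the socket table, and the `assemble` function are hypothetical illustrations, not OpenAvatar's actual API.

```python
# Hypothetical sketch: an avatar combines interchangeable parts, but only
# into slots the base part actually provides (the "logical constraints").
SOCKETS = {"torso": {"head", "arms", "legs"}}  # slots a torso can accept

def assemble(parts):
    """Combine named part meshes into an avatar, rejecting unknown slots."""
    avatar = {}
    for slot, mesh in parts.items():
        if slot != "torso" and slot not in SOCKETS["torso"]:
            raise ValueError(f"no socket for {slot}")
        avatar[slot] = mesh
    return avatar

avatar = assemble({"torso": "torso_a", "head": "head_03", "legs": "legs_x"})
print(sorted(avatar))  # -> ['head', 'legs', 'torso']
```

Because parts only need to agree on the socket contract, artists can publish reusable assets and end users can mix them freely, which is the reuse the abstract is after.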
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, which is merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663
NASA Technical Reports Server (NTRS)
Searcy, Brittani
2017-01-01
Using virtual environments to assess complex, large-scale human tasks provides timely and cost-effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance with human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th-percentile (approx. 5 ft) female and a 95th-percentile (approx. 6 ft 1 in) male. In addition, the analysis helps identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame-by-frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis in SLS and the benefits of analyzing tasks in NASA MSFC's VEL.
Community-based pedestrian safety training in virtual reality: A pragmatic trial.
Schwebel, David C; Combs, Tabitha; Rodriguez, Daniel; Severson, Joan; Sisiopiku, Virginia
2016-01-01
Child pedestrian injuries are a leading cause of mortality and morbidity across the United States and the world. Repeated practice at the cognitive-perceptual task of crossing a street may lead to safer pedestrian behavior. Virtual reality offers a unique opportunity for repeated practice without the risk of actual injury. This study conducted a pre-post within-subjects trial of training children in pedestrian safety using a semi-mobile, semi-immersive virtual pedestrian environment placed at schools and community centers. Pedestrian safety skills among a group of 44 seven- and eight-year-old children were assessed in a laboratory, and then children completed six 15-minute training sessions in the virtual pedestrian environment at their school or community center following pragmatic trial strategies over the course of three weeks. Following training, pedestrian safety skills were re-assessed. Results indicate improvement in delay entering traffic following training. Safe crossings did not demonstrate change. Attention to traffic and time to contact with oncoming vehicles both decreased somewhat, perhaps an indication that training was incomplete and children were in the process of actively learning to be safer pedestrians. The findings suggest virtual reality environments placed in community centers hold promise for teaching children to be safer pedestrians, but future research is needed to determine the optimal training dosage. Copyright © 2015 Elsevier Ltd. All rights reserved.
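Among the measures in the study above is time to contact with oncoming vehicles. As a simple illustration (not the study's instrumentation), time-to-contact for a vehicle approaching at constant speed is just distance divided by speed; the values below are hypothetical.

```python
# Illustrative time-to-contact (TTC) computation: seconds until an
# oncoming vehicle, moving at constant speed, reaches the crossing point.
def time_to_contact(distance_m, speed_m_s):
    if speed_m_s <= 0:
        raise ValueError("vehicle must be approaching")
    return distance_m / speed_m_s

# A vehicle 30 m away at 12 m/s leaves 2.5 s of crossing margin.
print(time_to_contact(30.0, 12.0))  # -> 2.5
```

Lower TTC at the moment a child steps off the curb means a tighter margin, which is why a decrease in this measure after training was read as a sign that learning was still in progress.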
Sonic intelligence as a virtual therapeutic environment.
Tarnanas, Ioannis; Adam, Dimitrios
2003-06-01
This paper reports the results of a research project comparing two virtual collaborative environments as stress-coping environments for real-life situations: one with first-person visual immersion (first-perspective interaction) and one in which the user interacts through a sound-kinetic virtual representation of himself (avatar). Recent developments in coping research propose a shift from a trait-oriented approach to coping toward a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across two groups. There was the same number of participants across groups and gender balance within groups. Both groups went through two phases. In Phase I (Solo), each participant was assessed using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement within the time course of various hypothetical stressful encounters, performed under two conditions and a control condition. In Condition A, the participant was given a virtual stress assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario using a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the no-treatment condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks, but with two participants working in pairs.
The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions, and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment seems to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it might be better to conduct research using a sound-kinetic, avatar-based collaborative environment than a virtual first-person-perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera, and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO, which created an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.
Interaction Management Strategies on IRC and Virtual Chat Rooms.
ERIC Educational Resources Information Center
Altun, Arif
Internet Relay Chat (IRC) is an electronic medium that combines orthographic form with real time, synchronous transmission in an unregulated global multi-user environment. The orthographic letters mediate the interaction in that users can only access the IRC session through reading and writing; they have no access to any visual representations at…
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g., Intel-VT, AMD-V). However, the majority of system-level virtualization projects are driven by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
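The hardware support mentioned above is advertised through CPU feature flags: Intel-VT appears as `vmx` and AMD-V as `svm` in the Linux `/proc/cpuinfo` flags line. A minimal capability check, assuming that Linux flag format, might look like:

```python
# Minimal sketch (assumes the Linux /proc/cpuinfo format): detect which
# hardware virtualization extension a CPU advertises. Intel-VT shows up
# as the "vmx" flag; AMD-V as "svm".
def virtualization_support(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel-VT"
            if "svm" in flags:
                return "AMD-V"
    return None  # no hardware virtualization support advertised

# In practice the text would come from open("/proc/cpuinfo").read().
sample = "processor : 0\nflags\t: fpu vme svm sse2\n"
print(virtualization_support(sample))  # -> AMD-V
```

Hypervisors such as Xen perform a check of this kind at startup to decide whether hardware-assisted virtualization is available or whether they must fall back to software techniques.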
Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Schairer, Edward T.
2011-01-01
Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is its reliance on RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and does not support advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and Intergraph EMS and CATIA stereolithography (STL) file formats.
WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
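A converter of the kind this report describes starts by parsing one of the listed CAD formats. As a hedged sketch, here is a minimal reader for the Wavefront OBJ format (handling only `v` vertex and `f` face records; real OBJ files carry many more record types, and this is not the report's actual converter):

```python
# Minimal Wavefront OBJ reader: extracts vertices and faces, the geometry
# a converter into a VR toolkit's scene format would start from.
def read_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ face indices are 1-based and may carry /vt/vn suffixes.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts, faces = read_obj(obj)
print(len(verts), faces[0])  # -> 3 (0, 1, 2)
```

Once geometry is in this neutral vertex/face form, emitting the target toolkit's file format (or building objects through its C API) is a separate, format-specific step.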
NASA Technical Reports Server (NTRS)
Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard
2003-01-01
The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real- time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.
Study on Collaborative Object Manipulation in Virtual Environment
NASA Astrophysics Data System (ADS)
Mayangsari, Maria Niken; Yong-Moo, Kwon
This paper presents a comparative study of network collaboration performance under different degrees of immersion. In particular, the relationship between user collaboration performance and the degree of immersion provided by the system is addressed and compared on the basis of several experiments. The user tests on our system include several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments.
Development of virtual environment for treating acrophobia.
Ku, J; Jang, D; Shin, M; Jo, H; Ahn, H; Lee, J; Cho, B; Kim, S I
2001-01-01
Virtual Reality (VR) is a new technology that enables humans to communicate with computers. It allows the user to see, hear, feel, and interact in a three-dimensional virtual world created graphically. Virtual Reality Therapy (VRT), based on this sophisticated technology, has recently been used in the treatment of subjects diagnosed with acrophobia, a disorder characterized by marked anxiety upon exposure to heights, avoidance of heights, and a resulting interference in functioning. Conventional virtual reality systems for the treatment of acrophobia are limited in that they are based on overly costly devices or somewhat unrealistic graphic scenes. The goal of this study was to develop an inexpensive and more realistic virtual environment for the exposure therapy of acrophobia. We constructed two types of virtual environments. One comprises a bungee-jump tower in the middle of a city; it includes an open lift, surrounded by props beside the tower, that allows the patient to feel a sense of height. The other is composed of diving boards of various heights; it provides a view of a lower diving board and of people swimming in the pool, presenting the patient with stimuli upon exposure to heights.
3D virtual environment of Taman Mini Indonesia Indah in a web
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.
2018-05-01
Taman Mini Indonesia Indah, known as TMII, is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the various provinces of Indonesia. The official website of TMII describes the traditional houses, but the information available to the public is limited. To provide more detailed information about TMII to the public, this research aims to create and develop virtual traditional houses as 3D graphics models and present them via a website. Virtual Reality (VR) technology was used to display the visualization of TMII and the surrounding environment. This research used the Blender software to create the 3D models and the Unity3D software to make virtual reality models that can be shown on the web. This research successfully created 33 virtual traditional houses of the provinces of Indonesia. Textures were captured from the original houses to make the models realistic. The result of this research is the TMII website, including virtual culture houses that can be displayed through a web browser. The website consists of virtual environment scenes, and internet users can walk through and navigate inside the scenes.
Iris: Constructing and Analyzing Spectral Energy Distributions with the Virtual Observatory
NASA Astrophysics Data System (ADS)
Laurino, O.; Budynkiewicz, J.; Busko, I.; Cresitello-Dittmar, M.; D'Abrusco, R.; Doe, S.; Evans, J.; Pevunova, O.
2014-05-01
We present Iris 2.0, the latest release of the Virtual Astronomical Observatory application for building and analyzing Spectral Energy Distributions (SEDs). With Iris, users may read in and display SEDs, inspect and edit any selection of SED data, fit models to SEDs in arbitrary spectral ranges, and calculate confidence limits on best-fit parameters. SED data may be loaded into the application from VOTable and FITS files compliant with the International Virtual Observatory Alliance interoperable data models, or retrieved directly from NED or the Italian Space Agency Science Data Center; data in non-standard formats may also be converted within the application. Users may seamlessly exchange data between Iris and other Virtual Observatory tools using the Simple Application Messaging Protocol. Iris 2.0 also provides a tool for redshifting, interpolating, and measuring integrated fluxes, and allows simple aperture corrections for individual points and SED segments. Custom Python functions, template models, and template libraries may be imported into Iris for fitting SEDs. Iris may be extended through Java plugins; users can install third-party packages or develop their own plugins using Iris' Software Development Kit. Iris 2.0 is available for Linux and Mac OS X systems.
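The simplest model one fits to SED photometry is a power law. As an illustration of the kind of fitting Iris performs over user-chosen spectral ranges (this is not Iris itself, and the synthetic data are assumptions), a power law F = a·νᵇ can be fit by linear least squares in log-log space:

```python
import math

# Illustrative power-law SED fit: F = a * nu**b becomes a straight line
# in log-log space, so ordinary least squares recovers b (the slope) and
# log10(a) (the intercept).
def fit_power_law(freqs, fluxes):
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(s) for s in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx
    return 10 ** log_a, b

# Synthetic photometry drawn from F = 2 * nu**-1.5 recovers the exponent.
freqs = [1e9, 1e10, 1e11]
fluxes = [2 * nu ** -1.5 for nu in freqs]
a, b = fit_power_law(freqs, fluxes)
print(round(b, 3))  # -> -1.5
```

Real SED fitting adds measurement errors, multiple model components, and confidence-limit estimation on the best-fit parameters, which is where a dedicated tool earns its keep.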
Design on the MUVE: Synergizing Online Design Education with Multi-User Virtual Environments (MUVE)
ERIC Educational Resources Information Center
Sakalli, Isinsu; Chung, WonJoon
2015-01-01
The world is becoming increasingly virtual. Since the invention of the World Wide Web, information and human interaction has been transferring to the web at a rapid rate. Education is one of the many institutions that is taking advantage of accessing large numbers of people globally through computers. While this can be a simpler task for…
Lost in Interaction in IMS Learning Design Runtime Environments
ERIC Educational Resources Information Center
Derntl, Michael; Neumann, Susanne; Oberhuemer, Petra
2014-01-01
Educators are exploiting the advantages of advanced web-based collaboration technologies and massive online interactions. Interactions between learners and human or nonhuman resources therefore play an increasingly important pedagogical role, and the way these interactions are expressed in the user interface of virtual learning environments is…
Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki
2016-08-19
Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.
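The redundant-resource assignment described above can be sketched minimally, under the assumption that reliability comes from reserving spare physical nodes per virtual network so a failed node can be re-mapped to a spare; the function name and allocation policy are hypothetical, not the paper's algorithm.

```python
# Hypothetical sketch: allocate primary physical sensor nodes to a VWSN
# according to user demand, plus redundant spares for recovery when the
# physical network changes.
def assign(physical_nodes, demand, spares):
    needed = demand + spares
    if len(physical_nodes) < needed:
        raise ValueError("not enough physical nodes")
    allocation = physical_nodes[:demand]   # nodes serving the VWSN now
    backup = physical_nodes[demand:needed]  # spares held for recovery
    return allocation, backup

alloc, backup = assign(["n1", "n2", "n3", "n4", "n5"], demand=3, spares=1)
print(alloc, backup)  # -> ['n1', 'n2', 'n3'] ['n4']
```

On a node failure, the recovery step would promote a backup node into the allocation, letting the virtual network's service continue without interruption.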
NASA Technical Reports Server (NTRS)
Gutensohn, Michael
2018-01-01
The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the natural user interface (NUI) of the lab. The overarching goal is to create a seamless user interface that allows the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.
The Ames Virtual Environment Workstation: Implementation issues and requirements
NASA Technical Reports Server (NTRS)
Fisher, Scott S.; Jacoby, R.; Bryson, S.; Stone, P.; Mcdowall, I.; Bolas, M.; Dasaro, D.; Wenzel, Elizabeth M.; Coler, C.; Kerr, D.
1991-01-01
This presentation describes recent developments in the implementation of a virtual environment workstation in the Aerospace Human Factors Research Division of NASA's Ames Research Center. Introductory discussions are presented on the primary research objectives and applications of the system and on the system's current hardware and software configuration. Principal attention is then focused on unique issues and problems encountered in the workstation's development, with emphasis on its ability to meet original design specifications for computational graphics performance and for the associated human factors requirements necessary to provide a compelling sense of presence and efficient interaction in the virtual environment.
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study.
McGrail, Kimberlyn Marie; Ahuja, Megan Alyssa; Leaver, Chad Andrew
2017-05-26
Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. The use of virtual visits is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. The descriptive objectives were to assess users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. The study took place in British Columbia, Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched (3:1) to the cohort. The first virtual visit was used as the intervention, and the main outcome measures were total primary care visits and costs. During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001). The survey of 399 virtual visit patients indicated that virtual visits were liked by patients, with 372 (93.2%) of respondents saying their virtual visit was of high quality and 364 (91.2%) reporting their virtual visit was "very" or "somewhat" helpful in resolving their health issue.
Segmented regression analysis and the corresponding regression parameter estimates suggested virtual visits appear to have the potential to decrease primary care costs by approximately Can $4 per quarter (Can -$3.79, P=.12), but that benefit is most associated with seeing a known provider (Can -$8.68, P<.001). Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. ©Kimberlyn Marie McGrail, Megan Alyssa Ahuja, Chad Andrew Leaver. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.05.2017.
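The segmented regression mentioned above is an interrupted time-series model: a baseline level and trend, plus a change in level and trend after the intervention (here, the first virtual visit). The sketch below uses synthetic quarterly data; the coefficients are illustrations, not the study's estimates.

```python
import numpy as np

def segmented_fit(t, y, t0):
    """OLS fit of y = b0 + b1*t + b2*post + b3*(t - t0)*post,
    where post = 1 for quarters at or after the intervention t0."""
    post = (t >= t0).astype(float)
    X = np.column_stack(
        [np.ones_like(t, dtype=float), t, post, (t - t0) * post]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

t = np.arange(20, dtype=float)
# Synthetic costs: +1/quarter baseline trend, then a -4 level drop at t0=10.
y = 50 + 1.0 * t - 4.0 * (t >= 10)
b0, b1, b2, b3 = segmented_fit(t, y, 10)
print(round(b2, 2))  # -> -4.0
```

The level-change coefficient (b2) plays the role of the per-quarter cost effect the study reports, while the trend-change coefficient (b3) would capture any gradual divergence after the first virtual visit.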
ERIC Educational Resources Information Center
Kapralos, Bill; Hogan, Michelle; Pribetic, Antonin I.; Dubrowski, Adam
2011-01-01
Purpose: Gaming and interactive virtual simulation environments support a learner-centered educational model allowing learners to work through problems acquiring knowledge through an active, experiential learning approach. To develop effective virtual simulations and serious games, the views and perceptions of learners and educators must be…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Bowman, Doug A; Scerbo, Siroberto
Virtual reality (VR) systems have been proposed for use in numerous training scenarios, such as room clearing, which require the trainee to maintain spatial awareness. But many VR training systems lack a fully surrounding display, requiring trainees to use a combination of physical and virtual turns to view the environment, thus decreasing spatial awareness. One solution to this problem is to amplify head rotations, such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the surrounding environment with head movements alone. For example, in a multi-monitor system covering only a 90-degree field of regard, head rotations could be amplified four times to allow the user to see the entire 360-degree surrounding environment. This solution is attractive because it can be used with lower-cost VR systems and does not require virtual turning. However, the effects of amplified head rotations on spatial awareness and training transfer are not well understood. We hypothesized that small amounts of amplification might be tolerable, but that larger amplifications might cause trainees to become disoriented and to have decreased task performance and training transfer. In this paper, we will present our findings from an experiment designed to investigate these hypotheses. The experiment placed users in a virtual warehouse and asked them to move from room to room, counting objects placed around them in space. We varied the amount of amplification applied during these trials, and also varied the type of display used (head-mounted display or CAVE). We measured task performance and spatial awareness. We then assessed training transfer in an assessment environment with a fully surrounding display and no amplification. The results of this study will inform VR training system developers about the potential negative effects of using head rotation amplification and contribute to more effective VR training system design.
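The amplification described above is, at its core, a gain applied to head yaw: with a 90-degree field of regard and a gain of 4, a physical turn of ±45 degrees maps to a full ±180-degree virtual turn. A minimal sketch (illustrative values, not the experiment's software):

```python
# Head-rotation amplification: map a physical head yaw to a larger
# virtual yaw so head movement alone covers the full surround.
def virtual_yaw(physical_yaw_deg, gain=4.0):
    return gain * physical_yaw_deg

# With gain 4, turning the head 45 degrees yields a 180-degree virtual
# turn, so the 90-degree physical field of regard spans 360 degrees.
print(virtual_yaw(45.0))   # -> 180.0
print(virtual_yaw(22.5))   # -> 90.0
```

The experiment's open question is precisely how large this gain can be before the mismatch between physical and virtual rotation degrades spatial awareness and training transfer.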
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shawver, D.M.; Stansfield, S.
This overview presents current research at Sandia National Laboratories in the Virtual Reality and Intelligent Simulation Lab. We are adding virtual actor support to an existing distributed VR environment which we have been developing and which provides shared immersion for multiple users. The virtual actor support we are adding to this environment is intended to provide semi-autonomous actors, with oversight and high-level guiding control by a director/user, and to allow the overall action to be driven by a scenario. We present an overview of the environment into which our virtual actors will be added in Section 3, and discuss the direction of the Virtual Actor research itself in Section 4. We briefly review related work in Section 2. First, however, we need to place the research in the context of what motivates it. The motivation for our construction of this environment, and the line of research associated with it, is based on a long-term program of providing support, through simulation, for situational training, by which we mean a type of training in which students learn to handle multiple situations or scenarios. In these situations, the student may encounter events ranging from the routine occurrence to the rare emergency. Indeed, the appeal of such training systems is that they could allow the student to experience and develop effective responses for situations they would otherwise have no opportunity to practice until they happened to encounter an actual occurrence. Examples of the type of students for this kind of training would be security forces or emergency response forces. An example of the type of training scenario we would like to support is given in Section 4.2.
Cross-standard user description in mobile, medical oriented virtual collaborative environments
NASA Astrophysics Data System (ADS)
Ganji, Rama Rao; Mitrea, Mihai; Joveski, Bojan; Chammem, Afef
2015-03-01
By combining four different open standards belonging to the ISO/IEC JTC1/SC29 WG11 (a.k.a. MPEG) and W3C, this paper advances an architecture for mobile, medical oriented virtual collaborative environments. The various users are represented according to MPEG-UD (MPEG User Description) while the security issues are dealt with by deploying the WebID principles. On the server side, irrespective of their elementary types (text, image, video, 3D, …), the medical data are aggregated into hierarchical, interactive multimedia scenes which are alternatively represented in the MPEG-4 BiFS or HTML5 standards. This way, each type of content can be optimally encoded according to its particular constraints (semantic, medical practice, network conditions, etc.). The mobile device only needs to display the content (inside an MPEG player or an HTML5 browser) and capture the user interaction. The overall architecture is implemented and tested under the framework of the MEDUSA European project, in partnership with medical institutions. The testbed considers a server emulated by a PC and heterogeneous user devices (tablets, smartphones, laptops) running under iOS, Android and Windows operating systems. The connection between the users and the server is provided by WiFi or 3G/4G networks.
A Full Body Steerable Wind Display for a Locomotion Interface.
Kulkarni, Sandip D; Fisher, Charles J; Lefler, Price; Desai, Aditya; Chakravarthy, Shanthanu; Pardyjak, Eric R; Minor, Mark A; Hollerbach, John M
2015-10-01
This paper presents the Treadport Active Wind Tunnel (TPAWT), a full-body immersive virtual environment for the Treadport locomotion interface designed for generating wind on a user from any frontal direction at speeds up to 20 kph. The goal is to simulate the experience of realistic wind while walking in an outdoor virtual environment. A recirculating-type wind tunnel was created around the pre-existing Treadport installation by adding a large fan, ducting, and enclosure walls. Two sheets of air in a non-intrusive design flow along the side screens of the back-projection CAVE-like visual display, where they impinge and mix at the front screen to redirect towards the user in a full-body cross-section. By varying the flow conditions of the air sheets, the direction and speed of wind at the user are controlled. Design challenges to fit the wind tunnel in the pre-existing facility, and to manage turbulence to achieve stable and steerable flow, were overcome. The controller performance for wind speed and direction is demonstrated experimentally.
Virtual reality hardware for use in interactive 3D data fusion and visualization
NASA Astrophysics Data System (ADS)
Gourley, Christopher S.; Abidi, Mongi A.
1997-09-01
Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact Channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.
Center for Nanophase Materials Sciences
NASA Astrophysics Data System (ADS)
Horton, Linda
2002-10-01
The Center for Nanophase Materials Sciences (CNMS) will be a user facility with a strong component of joint, collaborative research. CNMS is being developed, together with the scientific community, with support from DOE's Office of Basic Energy Sciences. The Center will provide a thriving, multidisciplinary environment for research as well as the education of students and postdoctoral scholars. It will be co-located with the Spallation Neutron Source (SNS) and the Joint Institute for Neutron Sciences (JINS). The CNMS will integrate nanoscale research with neutron science, synthesis science, and theory/modeling/simulation, bringing together four areas in which the United States has clear national research and educational needs. The Center's research will be organized under three scientific thrusts: nano-dimensioned "soft" materials (including organic, hybrid, and interfacial nanophases); complex "hard" materials systems (including the crosscutting areas of interfaces and reduced dimensionality that become scientifically critical on the nanoscale); and theory/modeling/simulation. This presentation will summarize the progress towards identification of the specific research focus topics for the Center. Currently proposed topics, based on two workshops with the potential user community, include catalysis, nanomagnetism, synthetic and bio-inspired macromolecular materials, nanophase biomaterials, nanofluidics, optics/photonics, carbon-based nanostructures, collective behavior, nanoscale interface science, virtual synthesis and nanomaterials design, and electronic structure, correlations, and transport. In addition, the proposed 80,000 square foot facility (wet/dry labs, nanofabrication clean rooms, and offices) and the associated technical equipment will be described. The CNMS is scheduled to begin construction in spring, 2003. Initial operations are planned for late in 2004.
Higuera-Trujillo, Juan Luis; López-Tarruella Maldonado, Juan; Llinares Millán, Carmen
2017-11-01
Psychological research into human factors frequently uses simulations to study the relationship between human behaviour and the environment. Their validity depends on their similarity with the physical environments. This paper aims to validate three environmental-simulation display formats: photographs, 360° panoramas, and virtual reality. To do this we compared the psychological and physiological responses evoked by simulated-environment set-ups to those from a physical environment set-up; we also assessed the users' sense of presence. Analyses show that 360° panoramas offer the closest-to-reality results according to the participants' psychological responses, and virtual reality according to the physiological responses. Correlations between the feeling of presence and physiological and other psychological responses were also observed. These results may be of interest to researchers using currently available environmental-simulation technologies to replicate the experience of physical environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Virtual socialization in adults with spina bifida.
Chan, Wendy M; Dicianno, Brad E
2011-03-01
To use spina bifida (SB) as a model of chronic physical disability to study the associations of virtual socialization, friendships, and quality of life (QOL) in adults. Cross-sectional survey. Subjects were recruited from residential living facilities, outpatient clinics, and the University of Pittsburgh Medical Center (UPMC) research registry. Inclusion criteria were age between 18 and 80 years and clinical diagnoses of SB cystica (myelomeningocele) and hydrocephalus. The exclusion criterion was the diagnosis of SB occulta. Sixty-three eligible adults were enrolled, and all completed the study. The survey via questionnaire was performed in person or over the telephone. Data collected included the World Health Organization's Medical Outcomes Study 26-item Short Form, Economic Self-Sufficiency from the Craig Handicap Assessment and Reporting Technique Short Form, virtual socializing habits, and number of friends. Three linear regression models were performed, each with a unique dependent variable: number of friends, psychological QOL, or social QOL. The following independent variables were included in all models: age, gender, ethnicity, economic self-sufficiency, marital status, education level, lesion level, health status, user group, collection method, and time spent virtually socializing. In addition, each regression model included the dependent variables from the other 2 models in its independent variables. Increased degree of virtual socialization (VS) was associated with a greater number of friends (P = .003, r = .684). Mean (standard deviation) numbers of friends by VS groups were the following: users, n = 4.9 ± 2.7; semi-users, n = 3.8 ± 2.7; and nonusers, n = 2.1 ± 2.3, representing a 2.3-times greater number of friends for users than for nonusers. The effect of virtual socialization on QOL was also positive but not statistically significant.
People with chronic physical disabilities, such as SB, are at high risk for peer rejection and long-term social avoidance. Users of the most immersive forms of virtual socialization have more real-world friends than both semi-users and nonusers. Any form of VS, whether immersive or real time, may improve the opportunity for meaningful social encounters. Prospective intervention studies are needed to elucidate whether a causal positive relationship between virtual socialization and friendships exists. Further research is needed to clarify virtual socialization's impact on QOL; however, the upward trend in all 4 domains of QOL across user groups suggests similar potential benefits. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality
NASA Astrophysics Data System (ADS)
Parmar, Dhaval
Virtual reality is on the verge of becoming mainstream household technology, as technologies such as head-mounted displays, trackers, and interaction devices are becoming affordable and easily available. Virtual reality (VR) has immense potential in enhancing the fields of education and training, and its power can be used to spark interest and enthusiasm among learners. It is, therefore, imperative to evaluate the risks and benefits that immersive virtual reality poses to the field of education. Research suggests that learning is an embodied process. Learning depends on grounded aspects of the body including action, perception, and interactions with the environment. This research aims to study if immersive embodiment through the means of virtual reality facilitates embodied cognition. A pedagogical VR solution which takes advantage of embodied cognition can lead to enhanced learning benefits. Towards achieving this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality. This research evaluates the effects of three levels of immersive embodied interactions on cognitive thinking, presence, usability, and satisfaction among users in the fields of science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is greatly effective in knowledge acquisition and retention, and highly enhances user satisfaction, interest and enthusiasm. Users experience high levels of presence and are profoundly engaged in the learning activities within the immersive virtual environments. The studies presented in this research evaluate pedagogical VR software to train and motivate students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning.
This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning of computational concepts as a future direction, and presents the challenges faced in implementing the IEIVR metaphor due to extended periods of immersion. Results from the conducted studies help in formulating guidelines for virtual reality and education researchers working in STEM education and training, and for educators and curriculum developers seeking to improve student engagement in the STEM fields.
NASA Technical Reports Server (NTRS)
Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.
2012-01-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.
ERIC Educational Resources Information Center
Chen, Yu-Hsuan; Wang, Chang-Hwa
2018-01-01
Although research has indicated that augmented reality (AR)-facilitated instruction improves learning performance, further investigation of the usefulness of AR from a psychological perspective has been recommended. Researchers consider presence a major psychological effect when users are immersed in virtual reality environments. However, most…
Making Learning Fun: Quest Atlantis, A Game Without Guns
ERIC Educational Resources Information Center
Barab, Sasha; Thomas, Michael; Dodge, Tyler; Carteaux, Robert; Tuzun, Hakan
2005-01-01
This article describes the Quest Atlantis (QA) project, a learning and teaching project that employs a multiuser, virtual environment to immerse children, ages 9-12, in educational tasks. QA combines strategies used in commercial gaming environments with lessons from educational research on learning and motivation. It allows users at participating…
A Trusted Portable Computing Device
NASA Astrophysics Data System (ADS)
Ming-wei, Fang; Jun-jun, Wu; Peng-fei, Yu; Xin-fang, Zhang
A trusted portable computing device and its security mechanism were presented to solve the security issues in mobile office, such as attacks by viruses and Trojan horses and the loss or theft of storage devices. It used a smart card to build a trusted portable security base, virtualization to create a secure virtual execution environment, a two-factor authentication mechanism to identify legitimate users, and dynamic encryption to protect data privacy. The security environment described in this paper is characterized by portability, security, and reliability. It can meet the security requirements of mobile office.
Ames Lab 101: C6: Virtual Engineering
McCorkle, Doug
2018-01-01
Ames Laboratory scientist Doug McCorkle explains the importance of virtual engineering and talks about the C6. The C6 is a three-dimensional, fully-immersive synthetic environment residing in the center atrium of Iowa State University's Howe Hall.
ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.
Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas
2018-06-24
ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade VR systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale it to focus on key features, and 3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
The Virtual Mission - A step-wise approach to large space missions
NASA Technical Reports Server (NTRS)
Hansen, Elaine; Jones, Morgan; Hooke, Adrian; Pomphrey, Richard
1992-01-01
Attention is given to the Virtual Mission (VM) concept, wherein multiple scientific instruments will be on different platforms, in different orbits, operated from different control centers, at different institutions, and reporting to different user groups. The VM concept enables NASA's science and application users to accomplish their broad science goals with a fleet made up of smaller, more focused spacecraft and to alleviate the difficulties involved with single, large, complex spacecraft. The concept makes possible the stepwise 'go-as-you-pay' extensible approach recommended by Augustine (1990). It enables scientists to mix and match the use of many smaller satellites in novel ways to respond to new scientific ideas and needs.
Earth System Grid II (ESG): Turning Climate Model Datasets Into Community Resources
NASA Astrophysics Data System (ADS)
Williams, D.; Middleton, D.; Foster, I.; Nevedova, V.; Kesselman, C.; Chervenak, A.; Bharathi, S.; Drach, B.; Cinquni, L.; Brown, D.; Strand, G.; Fox, P.; Garcia, J.; Bernholdte, D.; Chanchio, K.; Pouchard, L.; Chen, M.; Shoshani, A.; Sim, A.
2003-12-01
High-resolution, long-duration simulations performed with advanced DOE SciDAC/NCAR climate models will produce tens of petabytes of output. To be useful, this output must be made available to global change impacts researchers nationwide, both at national laboratories and at universities, other research laboratories, and other institutions. To this end, we propose to create a new Earth System Grid, ESG-II - a virtual collaborative environment that links distributed centers, users, models, and data. ESG-II will provide scientists with virtual proximity to the distributed data and resources that they require to perform their research. The creation of this environment will significantly increase the scientific productivity of U.S. climate researchers by turning climate datasets into community resources. In creating ESG-II, we will integrate and extend a range of Grid and collaboratory technologies, including the DODS remote access protocols for environmental data, Globus Toolkit technologies for authentication, resource discovery, and resource access, and Data Grid technologies developed in other projects. We will develop new technologies for (1) creating and operating "filtering servers" capable of performing sophisticated analyses, and (2) delivering results to users. In so doing, we will simultaneously contribute to climate science and advance the state of the art in collaboratory technology. We expect our results to be useful to numerous other DOE projects. The three-year R&D program will be undertaken by a talented and experienced team of computer scientists at five laboratories (ANL, LBNL, LLNL, NCAR, ORNL) and one university (ISI), working in close collaboration with climate scientists at several sites.
Using virtual reality to assess user experience.
Rebelo, Francisco; Noriega, Paulo; Duarte, Emília; Soares, Marcelo
2012-12-01
The aim of this article is to discuss how user experience (UX) evaluation can benefit from the use of virtual reality (VR). UX is usually evaluated in laboratory settings. However, considering that UX occurs as a consequence of the interaction between the product, the user, and the context of use, the assessment of UX can benefit from a more ecological test setting. VR provides the means to develop realistic-looking virtual environments with the advantage of allowing greater control of the experimental conditions while granting good ecological validity. The methods used to evaluate UX, as well as their main limitations, are identified. The current VR equipment and its potential applications (as well as its limitations and drawbacks) to overcome some of the limitations in the assessment of UX are highlighted. The relevance of VR for UX studies is discussed, and a VR-based framework for evaluating UX is presented. UX research may benefit from a VR-based methodology in the scopes of user research (e.g., assessment of users' expectations derived from their lifestyles) and human-product interaction (e.g., assessment of users' emotions from the first moment of contact with the product and then during the interaction). This article provides knowledge to researchers and professionals engaged in the design of technological interfaces about the usefulness of VR in the evaluation of UX.
Lorenz, Mario; Brade, Jennifer; Diamond, Lisa; Sjölie, Daniel; Busch, Marc; Tscheligi, Manfred; Klimant, Philipp; Heyde, Christoph-E; Hammer, Niels
2018-04-23
Virtual Reality (VR) is used for a variety of applications ranging from entertainment to psychological medicine. VR has been demonstrated to influence higher order cognitive functions and cortical plasticity, with implications for phobia and stroke treatment. An integral part of successful VR is a high sense of presence: a feeling of 'being there' in the virtual scenario. The underlying cognitive and perceptive functions causing presence in VR scenarios are however not completely known. It is evident that brain function is influenced by drugs such as ethanol, potentially confounding cortical plasticity, also in VR. As ethanol is ubiquitous and forms part of daily life, understanding the effects of ethanol on presence and user experience (the attitudes and emotions about using VR applications) is important. This exploratory study aims at contributing towards an understanding of how low-dose ethanol intake influences presence, user experience and their relationship in a validated VR context. It was found that low-level ethanol consumption did influence presence and user experience, but on a minimal level. In contrast, correlations between presence and user experience were strongly influenced by low-dose ethanol. Ethanol consumption may consequently alter cognitive and perceptive functions related to the connections between presence and user experience.
Immersive Collaboration Simulations: Multi-User Virtual Environments and Augmented Realities
NASA Technical Reports Server (NTRS)
Dede, Chris
2008-01-01
Emerging information technologies are driving shifts in the knowledge and skills society values, the development of new methods of teaching and learning, and changes in the characteristics of learning.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
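The dimension-pairing step this abstract describes (pairs of multivariate dimensions mapped onto parallel rectangles, each observation becoming a polyline with one vertex per plane) can be sketched as follows. The function name, the [0, 1] normalization, and the zero-padding rule for odd dimension counts are assumptions for illustration, not the authors' implementation:

```python
def observation_to_polyline(obs, mins, maxs):
    """Map a multivariate observation to polyline vertices on parallel planes.

    Each dimension is normalized to [0, 1] using per-dimension bounds,
    then consecutive dimensions are grouped into (x, y) pairs, one pair
    per parallel rectangle.
    """
    norm = [(v - lo) / (hi - lo) for v, lo, hi in zip(obs, mins, maxs)]
    if len(norm) % 2:        # pad odd dimension counts with a zero
        norm.append(0.0)
    # vertex i sits on plane i at 2D position (x, y) within that rectangle
    return [(norm[i], norm[i + 1]) for i in range(0, len(norm), 2)]
```

For example, a 3-dimensional observation yields two vertices, i.e. a polyline crossing two planes, which is the generalization of the one-vertex-per-axis rule of classical parallel coordinates.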
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, until recently, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
WeaVR: a self-contained and wearable immersive virtual environment simulation system.
Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James
2015-03-01
We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
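The redirected walking used to keep WeaVR users inside the field typically works by applying subtle gains to the user's physical rotation so that a straight virtual path bends the physical path back toward the tracked area. A minimal sketch of a rotation-gain mapping follows; the gain value is illustrative, not taken from the paper.

```python
def redirect_heading(virtual_heading, user_rotation, rotation_gain=1.2):
    """Map a user's physical rotation into the virtual world with a gain.

    With gain > 1 the virtual scene turns slightly more than the user's
    head, so walking a straight virtual line curves the physical
    trajectory, keeping the walker within a bounded physical area.
    Headings and rotations are in degrees.
    """
    return (virtual_heading + rotation_gain * user_rotation) % 360.0
```

A 90° physical turn with a 1.2 gain yields a 108° virtual turn; gains close to 1 are chosen in practice so the manipulation stays below the user's detection threshold.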
Use of virtual reality to promote hand therapy post-stroke
NASA Astrophysics Data System (ADS)
Tsoupikova, Daria; Stoykov, Nikolay; Vick, Randy; Li, Yu; Kamper, Derek; Listenberger, Molly
2013-03-01
A novel artistic virtual reality (VR) environment was developed and tested as a rehabilitation protocol for post-stroke hand therapy. The system was developed by an interdisciplinary team of engineers, art therapists, occupational therapists, and VR artists to improve patients' motivation and engagement. Specific exercises were designed to explicitly promote the practice of therapeutic tasks requiring hand and arm coordination for upper extremity rehabilitation. Here we describe system design, development, and user testing for efficiency, subjects' satisfaction, and clinical feasibility. We report results of the completed qualitative, pre-clinical pilot study of the system's effectiveness for therapy. Fourteen stroke survivors with chronic hemiparesis participated in a single training session within the environment to gauge user response to the protocol through a custom survey. Users found the system comfortable and enjoyable (though somewhat tiring), found the instructions clear, and reported a high level of satisfaction with the VR environment and with the variety and difficulty of the rehabilitation tasks. Most patients reported very positive impressions of the VR environment and rated it highly, appreciating its engagement and motivation. We are currently conducting a 6-week longitudinal intervention study in stroke survivors with chronic hemiparesis. Initial results from the first subjects demonstrate that the system is operational and can facilitate therapy for post-stroke patients with upper extremity impairment.
IceCube Polar Virtual Reality exhibit: immersive learning for learners of all ages
NASA Astrophysics Data System (ADS)
Madsen, J.; Bravo Gallart, S.; Chase, A.; Dougherty, P.; Gagnon, D.; Pronto, K.; Rush, M.; Tredinnick, R.
2017-12-01
The IceCube Polar Virtual Reality project is an innovative, interactive exhibit that explains the operation and science of a flagship experiment in polar research, the IceCube Neutrino Observatory. The exhibit allows users to travel from the South Pole, where the detector is located, to the furthest reaches of the universe, learning how the detection of high-energy neutrinos has opened a new view to the universe. This novel exhibit combines a multitouch tabletop display system and commercially available virtual reality (VR) head-mounted displays to enable informal STEM learning of polar research. The exhibit, launched in early November 2017 during the Wisconsin Science Festival in Madison, WI, will study how immersive VR can enhance informal STEM learning. The foundation of this project is built upon a strong collaborative effort between the Living Environments Laboratory (LEL), the Wisconsin IceCube Particle Astrophysics Center (WIPAC), and the Field Day Laboratory groups from the University of Wisconsin-Madison campus. The project is funded through an NSF Advancing Informal STEM Learning (AISL) grant, under a special call for engaging students and the public in polar research. This exploratory pathways project seeks to build expertise to allow future extensions. The plan is to submit a subsequent AISL Broad Implementation proposal to add more 3D environments for other Antarctic research topics and locations in the future. We will describe the current implementation of the project and discuss the challenges and opportunities of working with an interdisciplinary team of scientists and technology and education researchers. We will also present preliminary assessment results, which seek to answer questions such as: Did users gain a better understanding of IceCube research from interacting with the exhibit? Do both technologies (touch table and VR headset) provide the same level of engagement? Is one technology better suited for specific learning outcomes?
Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.
Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell
2011-06-01
This paper presents the design of a tele-robotic microsurgical platform designed for development of cooperative and tele-operative control schemes, sensor based smart instruments, user interfaces and new surgical techniques with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained optimization based virtual fixture control to provide Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
3DUI assisted lower and upper member therapy.
Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron
2012-01-01
3DUIs are becoming very popular among researchers, developers, and users because they allow more immersive and interactive experiences by taking advantage of human dexterity. The features these interfaces offer outside the gaming environment have enabled applications in the medical field, enhancing the user experience and aiding the therapy process in controlled and monitored environments. Using mainstream videogame 3DUIs based on inertial and image sensors available in the market, this work presents the development of a virtual environment navigated through captured lower-limb gestures for assisting motion during therapy.
3D Flow visualization in virtual reality
NASA Astrophysics Data System (ADS)
Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa
2017-11-01
By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can ``scroll'' forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
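The Q-criterion isosurfaces rendered in the VR application above come from a standard vortex-identification quantity: Q is half the difference between the squared norms of the rotation and strain-rate parts of the velocity-gradient tensor. A minimal single-point sketch (the actual pipeline would evaluate this over the whole gridded wake):

```python
import numpy as np

def q_criterion(grad_u):
    """Q-criterion from a 3x3 velocity-gradient tensor du_i/dx_j.

    Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the
    symmetric (strain-rate) and antisymmetric (rotation) parts of
    grad_u. Positive Q marks rotation-dominated regions, i.e. vortex
    cores, which are the isosurfaces shown in VR.
    """
    grad_u = np.asarray(grad_u, dtype=float)
    S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor
    Omega = 0.5 * (grad_u - grad_u.T)  # rotation tensor
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))
```

A pure solid-body rotation gives Q > 0, while pure straining flow gives Q < 0, which is why a positive-Q isosurface isolates the shed vortices of the pitching panel's wake.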
A Low-Cost and Lightweight 3D Interactive Real Estate-Purposed Indoor Virtual Reality Application
NASA Astrophysics Data System (ADS)
Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.
2017-11-01
Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This lets buyers purchase a property off-the-plan more cheaply through virtual models: instead of showing the property through a 2D plan or renders, the visualized interior architecture of an unbuilt, on-sale property is demonstrated beforehand, giving investors the impression of being in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a real estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real estate-purposed VR application and that it satisfied the expectations of property buyers.
Love 2.0: a quantitative exploration of sex and relationships in the virtual world Second Life.
Craft, Ashley John
2012-08-01
This study presents the quantitative results of a web-based survey exploring the experiences of those who seek sex and relationships in the virtual world of Second Life. The survey gathered data on demographics, relationships, and sexual behaviors from 235 Second Life residents to compare with U.S. General Social Survey data on Internet users and the general population. The Second Life survey also gathered data on interests in and experiences with a number of sexual practices in both offline and online environments. Comparative analysis found that survey participants were significantly older, more educated, and less religious than a wider group of Internet users, and in certain age groups were far less likely to be married or have children. Motivations for engaging in cybersex were presented. Analysis of interest and experience of different sexual practices supported findings by other researchers that online environments facilitated access, but also indicated that interest in certain sexual practices could differ between offline and online environments.
Virtual fixtures as tools to enhance operator performance in telepresence environments
NASA Astrophysics Data System (ADS)
Rosenberg, Louis B.
1993-12-01
This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.
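The "ruler guiding a pencil" analogy above can be sketched as a simple motion filter: the commanded tool motion is decomposed along a guidance line, and the off-axis component is attenuated. This is a deliberately simplified stand-in for the haptic virtual fixtures described in the abstract; the function name and `compliance` parameter are illustrative.

```python
import numpy as np

def constrain_to_line(motion, line_dir, compliance=0.1):
    """Filter a commanded motion through a 'ruler' virtual fixture.

    The component of `motion` along the guidance line passes through
    unchanged; the off-axis component is scaled by `compliance`
    (0 = rigid fixture, 1 = no fixture at all).
    """
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    m = np.asarray(motion, dtype=float)
    along = np.dot(m, d) * d          # on-axis component (kept)
    return along + compliance * (m - along)  # off-axis damped
```

With `compliance=0.0` a diagonal hand motion is projected fully onto the line, which is exactly the sense in which a fixture lets precision exceed unaided human ability.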
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
Science Initiatives of the US Virtual Astronomical Observatory
NASA Astrophysics Data System (ADS)
Hanisch, R. J.
2012-09-01
The United States Virtual Astronomical Observatory program is the operational facility successor to the National Virtual Observatory development project. The primary goal of the US VAO is to build on the standards, protocols, and associated infrastructure developed by NVO and the International Virtual Observatory Alliance partners and to bring to fruition a suite of applications and web-based tools that greatly enhance the research productivity of professional astronomers. To this end, and guided by the advice of our Science Council (Fabbiano et al. 2011), we have focused on five science initiatives in the first two years of VAO operations: 1) scalable cross-comparisons between astronomical source catalogs, 2) dynamic spectral energy distribution construction, visualization, and model fitting, 3) integration and periodogram analysis of time series data from the Harvard Time Series Center and NASA Star and Exoplanet Database, 4) integration of VO data discovery and access tools into the IRAF data analysis environment, and 5) a web-based portal to VO data discovery, access, and display tools. We are also developing tools for data linking and semantic discovery, and have a plan for providing data mining and advanced statistical analysis resources for VAO users. Initial versions of these applications and web-based services are being released over the course of the summer and fall of 2011, with further updates and enhancements planned for throughout 2012 and beyond.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Sandia National Laboratories has developed a state-of-the-art augmented reality training system for close-quarters combat (CQB). The system uses a wearable augmented reality display to place the user in a real environment while engaging enemy combatants in virtual space (Boston Dynamics DI-Guy). The Umbra modeling and simulation environment is used to integrate and control the AR system.
Second Life: Creating Worlds of Wonder for Language Learners
ERIC Educational Resources Information Center
Ocasio, Michelle A.
2016-01-01
This article describes Second Life, a three-dimensional virtual environment in which a user creates an avatar for the purpose of socializing, learning, developing skills, and exploring a variety of academic and social areas. Since its inception in 2003, Second Life has been used by educators to build and foster innovative learning environments and…
Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin
2014-04-01
We have created You, M.D., an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor, and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of a virtual human's visual appearance affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users' reactions, based on their gender and body-mass index (BMI), when facing virtual humans with a BMI either concordant or discordant with their own. The results indicate that concordance between the user's BMI and the virtual human's BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.
The impact of physical navigation on spatial organization for sensemaking.
Andrews, Christopher; North, Chris
2013-12-01
Spatial organization has been proposed as a compelling approach to externalizing the sensemaking process. However, there are two ways in which space can be provided to the user: by creating a physical workspace that the user can interact with directly, such as can be provided by a large, high-resolution display, or through the use of a virtual workspace that the user navigates using virtual navigation techniques such as zoom and pan. In this study we explicitly examined the use of spatial sensemaking techniques within these two environments. The results demonstrate that these two approaches to providing sensemaking space are not equivalent, and that the greater embodiment afforded by the physical workspace changes how the space is perceived and used, leading to increased externalization of the sensemaking process.
Exploring 4D Flow Data in an Immersive Virtual Environment
NASA Astrophysics Data System (ADS)
Stevens, A. H.; Butkiewicz, T.
2017-12-01
Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. 
The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities, placing a cutting plane through a region of interest, etc. It is hypothesized that the advantages afforded by head-tracked viewing and 6DOF interaction devices will lead to faster and more efficient examination of 4D flow data. A human factors study is currently being prepared to empirically evaluate this method of visualization and interaction.
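Releasing dye particles into the flow field, as described above, amounts to advecting particle positions through the model's velocity field over time. A minimal forward-Euler sketch follows; the function names, step size, and the idea of passing the field as a callable are assumptions for illustration, not the actual ROMS/OpenVR implementation.

```python
import numpy as np

def advect(positions, velocity_at, dt=0.1, steps=10):
    """Advect dye particles through a flow field with forward-Euler steps.

    `positions` is an (N, 3) array of particle locations and
    `velocity_at(p)` returns the (interpolated) flow velocity at a
    point p. Each step moves every particle by dt times its local
    velocity, tracing out the streaklines a user would see in VR.
    """
    p = np.asarray(positions, dtype=float).copy()
    for _ in range(steps):
        p += dt * np.apply_along_axis(velocity_at, -1, p)
    return p
```

In practice a higher-order integrator (e.g. RK4) and time interpolation between the 4D model's stored tidal snapshots would replace the plain Euler step.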
ERIC Educational Resources Information Center
Rodríguez-Ardura, Inma; Meseguer-Artola, Antoni
2016-01-01
User retention is a major goal for higher education institutions running their teaching and learning programmes online. This is the first investigation into how the senses of presence and flow, together with perceptions about two central elements of the virtual education environment (didactic resource quality and instructor attitude), facilitate…
Stage Cylindrical Immersive Display
NASA Technical Reports Server (NTRS)
Abramyan, Lucy; Norris, Jeffrey S.; Powell, Mark W.; Mittman, David S.; Shams, Khawaja S.
2011-01-01
Panoramic images with a wide field of view are intended to provide a better understanding of an environment by placing objects of the environment on one seamless image. However, understanding the sizes and relative positions of the objects in a panorama is not intuitive and is prone to errors because the field of view is unnatural to human perception. Scientists are often faced with the difficult task of interpreting the sizes and relative positions of objects in an environment when viewing an image of the environment on computer monitors or prints. A panorama can display an object that appears to be to the right of the viewer when it is, in fact, behind the viewer. This misinterpretation can be very costly, especially when the environment is remote and/or only accessible by unmanned vehicles. A 270° cylindrical display has been developed that surrounds the viewer with carefully calibrated panoramic imagery that correctly engages their natural kinesthetic senses and provides a more accurate awareness of the environment. The cylindrical immersive display offers a more natural window to the environment than a standard cubic CAVE (Cave Automatic Virtual Environment), and the geometry allows multiple collocated users to simultaneously view data and share important decision-making tasks. A CAVE is an immersive virtual reality environment that allows one or more users to absorb themselves in a virtual environment. A common CAVE setup is a room-sized cube where the cube sides act as projection planes. By nature, all cubic CAVEs face a problem with edge matching at the edges and corners of the display. Modern immersive displays have found ways to minimize seams by creating very tight edges, and rely on the user to ignore the seam. One significant deficiency of flat-walled CAVEs is that the sense of orientation and perspective within the scene is broken across adjacent walls. 
On any single wall, parallel lines properly converge at their vanishing point as they should, and the sense of perspective within the scene contained on only one wall has integrity. Unfortunately, parallel lines that lie on adjacent walls do not necessarily remain parallel. This results in inaccuracies in the scene that can distract the viewer and subtract from the immersive experience of the CAVE.
Virtual Global Magnetic Observatory - Concept and Implementation
NASA Astrophysics Data System (ADS)
Papitashvili, V.; Clauer, R.; Petrov, V.; Saxena, A.
2002-12-01
The existing World Data Centers (WDC) continue to serve the worldwide scientific community well by providing free access to a huge number of global geophysical databases. Various institutions at different geographic locations house these Centers, organized mainly by scientific discipline. However, populating the Centers requires mandatory or voluntary submission of locally collected data. Recently, many digital geomagnetic datasets have been placed on the World Wide Web, and some of these sets have never been submitted to any data center. This has created an urgent need for more sophisticated search engines capable of identifying geomagnetic data on the Web and then retrieving a certain amount of data for scientific analysis. In this study, we formulate a concept of the virtual global magnetic observatory (VGMO) that currently uses a pre-set list of Web-based geomagnetic data holders (including WDC) for retrieving a requested case-study interval. By saving the retrieved data locally over multiple requests, a VGMO user begins to build his or her own data sub-center, which does not need to search the Web if a newly requested interval falls within the span of the earlier retrieved data. At the same time, this self-populated sub-center becomes available to other VGMO users down the request chain. Some aspects of Web "crawling" that help identify newly "webbed" digital geomagnetic data are also considered.
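The VGMO caching idea above, serving a new request locally when it falls within the span of previously retrieved data, reduces to a simple interval-coverage check. A minimal sketch, with invented names and no merging of adjacent spans:

```python
def covered_by_cache(request, cache_spans):
    """Decide whether a requested time interval can be served locally.

    `request` and each entry of `cache_spans` are (start, end) pairs,
    e.g. Unix timestamps. Returns True when some earlier retrieval
    fully contains the request, so the Web data holders need not be
    queried again; a real implementation would also merge overlapping
    spans and fetch only the missing portion of a partial hit.
    """
    start, end = request
    return any(s <= start and end <= e for s, e in cache_spans)
```

For example, a request for (5, 10) is covered by a cached span (0, 20), while (5, 25) is not and would trigger a new Web retrieval.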
Rizzo, Albert; Buckwalter, J Galen; John, Bruce; Newman, Brad; Parsons, Thomas; Kenny, Patrick; Williams, Josh
2012-01-01
The incidence of posttraumatic stress disorder (PTSD) in returning OEF/OIF military personnel is creating a significant healthcare challenge. This has served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD. One emerging form of treatment for combat-related PTSD that has shown promise involves the delivery of exposure therapy using immersive Virtual Reality (VR). Initial outcomes from open clinical trials have been positive and fully randomized controlled trials are currently in progress to further validate this approach. Based on our research group's initial positive outcomes using VR to emotionally engage and successfully treat persons undergoing exposure therapy for PTSD, we have begun development in a similar VR-based approach to deliver stress resilience training with military service members prior to their initial deployment. The Stress Resilience In Virtual Environments (STRIVE) project aims to create a set of combat simulations (derived from our existing Virtual Iraq/Afghanistan exposure therapy system) that are part of a multi-episode narrative experience. Users can be immersed within challenging combat contexts and interact with virtual characters within these episodes as part of an experiential learning approach for training a range of psychoeducational and cognitive-behavioral emotional coping strategies believed to enhance stress resilience. The STRIVE project aims to present this approach to service members prior to deployment as part of a program designed to better prepare military personnel for the types of emotional challenges that are inherent in the combat environment. During these virtual training experiences users are monitored physiologically as part of a larger investigation into the biomarkers of the stress response. 
One such construct, Allostatic Load, is being directly investigated via physiological and neuro-hormonal analysis from specimen collections taken immediately before and after engagement in the STRIVE virtual experience.
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.; Hanshumaker, W.; Osis, V.; Hamilton, C.
2002-12-01
We have created a new interactive exhibit in which the user can sit down and simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. This new public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. The exhibit is designed to look like the real ROPOS control console and includes three video monitors, a PC, a DVD player, an overhead speaker, graphic panels, buttons, lights, dials, and a seat in front of a joystick. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. The user can choose among three different dive sites in the caldera of Axial Volcano. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360-degree panorama and can look in all directions by manipulating the joystick. By clicking on markers embedded in the scene, the user can either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene. Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user has finished exploring the dive site, they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. The user can then choose a different dive or make the same dive again. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. 
The exhibit software was created with Macromedia Director using Apple Quicktime and Quicktime VR. The exhibit is based on the NeMO Explorer web site (http://www.pmel.noaa.gov/vents/nemo/explorer.html).
Sun, Huey-Min; Li, Shang-Phone; Zhu, Yu-Qian; Hsiao, Bo
2015-09-01
Technological advances in human-computer interaction have attracted increasing research attention, especially in the field of virtual reality (VR). Prior research has focused on examining the effects of VR on various outcomes, for example, learning and health. However, which factors affect the final outcomes? That is, what kind of VR system design will achieve higher usability? This question remains largely unanswered. Furthermore, when we look at VR system deployment through a human-computer interaction (HCI) lens, does the user's attitude play a role in achieving the final outcome? This study aims to understand the effect of immersion and involvement, as well as users' regulatory focus, on usability for a somatosensory VR learning system. This study hypothesized that regulatory focus and presence can effectively enhance users' perceived usability. Survey data from 78 students in Taiwan indicated that promotion focus is positively related to users' perceived efficiency, whereas involvement and promotion focus are positively related to users' perceived effectiveness. Promotion focus also predicts user satisfaction and overall usability perception. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Wearable Virtual White Cane Network for navigating people with visual impairment.
Gao, Yabiao; Chandrawanshi, Rahul; Nau, Amy C; Tse, Zion Tsz Ho
2015-09-01
Navigating the world with visual impairments presents inconveniences and safety concerns. Although a traditional white cane is the most commonly used mobility aid due to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network is an electronic traveling aid that utilizes ultrasound sonar technology to scan the surrounding environment for spatial information. The Wearable Virtual Cane Network is composed of four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. The Wearable Virtual Cane Network employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of our prototype allows the user to perform other tasks while using the Wearable Virtual Cane Network. When the Wearable Virtual Cane Network prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control bar (p < 4.3 × 10⁻⁵, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm² (1 cm × 1 cm) located 70 cm away. Our results showed that the walking speed for an obstacle course was increased by 23% on average when subjects used the Wearable Virtual Cane Network rather than the white cane. The obstacle course experiment also shows that the use of the white cane in combination with the Wearable Virtual Cane Network can significantly improve navigation over using either the white cane or the Wearable Virtual Cane Network alone (p < 0.05, paired t-test). © IMechE 2015.
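Ultrasonic sonar nodes like those described above typically estimate distance from echo time-of-flight and then map proximity to feedback intensity. The sketch below illustrates the general principle only; the thresholds, mapping, and function names are illustrative assumptions, not taken from the paper:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_distance(echo_delay_s: float) -> float:
    """Time-of-flight: the pulse travels to the object and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def vibration_level(distance_m: float, max_range_m: float = 3.0) -> float:
    """Map proximity to a 0..1 vibration intensity:
    closer objects produce stronger feedback."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

# A 4.08 ms round-trip echo corresponds to roughly 0.70 m,
# the detection distance quoted in the abstract.
d = echo_to_distance(0.00408)
```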
Authoring Tours of Geospatial Data With KML and Google Earth
NASA Astrophysics Data System (ADS)
Barcay, D. P.; Weiss-Malik, M.
2008-12-01
As virtual globes become widely adopted by the general public, the use of geospatial data has expanded greatly. With the popularization of Google Earth and other platforms, GIS systems have become virtual reality platforms. Using these platforms, a casual user can easily explore the world, browse massive data-sets, create powerful 3D visualizations, and share those visualizations with millions of people using the KML language. This technology has raised the bar for professionals and academics alike. It is now expected that studies and projects will be accompanied by compelling, high-quality visualizations. In this new landscape, a presentation of geospatial data can be the most effective form of advertisement for a project: engaging both the general public and the scientific community in a unified interactive experience. On the other hand, merely dumping a dataset into a virtual globe can be a disorienting, alienating experience for many users. To create an effective, far-reaching presentation, an author must take care to make their data approachable to a wide variety of users with varying knowledge of the subject matter, expertise in virtual globes, and attention spans. To that end, we present techniques for creating self-guided interactive tours of data represented in KML and visualized in Google Earth. Using these methods, we provide the ability to move the camera through the world while dynamically varying the content, style, and visibility of the displayed data. Such tours can automatically guide users through massive, complex datasets: engaging a broad user-base, and conveying subtle concepts that aren't immediately apparent when viewing the raw data. To the casual user these techniques result in an extremely compelling experience similar to watching video. Unlike video though, these techniques maintain the rich interactive environment provided by the virtual globe, allowing users to explore the data in detail and to add other data sources to the presentation.
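A self-guided tour of the kind described above is expressed in KML using Google's `gx:Tour` extension: a playlist of `gx:FlyTo` elements that move the camera between views. The following Python sketch generates a minimal two-stop tour document; the coordinates and tour name are placeholders, and real tours would also interleave `gx:AnimatedUpdate` elements to vary data visibility:

```python
# Build a minimal KML gx:Tour document by string templating.
def flyto(lon, lat, range_m, duration_s):
    return f"""    <gx:FlyTo>
      <gx:duration>{duration_s}</gx:duration>
      <LookAt>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <range>{range_m}</range>
      </LookAt>
    </gx:FlyTo>"""

def make_tour(stops):
    body = "\n".join(flyto(*s) for s in stops)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <gx:Tour>
    <name>Sample tour</name>
    <gx:Playlist>
{body}
    </gx:Playlist>
  </gx:Tour>
</kml>"""

kml = make_tour([(-122.0, 37.0, 5000, 4.0), (-121.5, 37.2, 2000, 6.0)])
```

Opening the resulting file in Google Earth plays the camera path while leaving the globe fully interactive, which is the key difference from rendered video.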
VERS: a virtual environment for reconstructive surgery planning
NASA Astrophysics Data System (ADS)
Montgomery, Kevin N.
1997-05-01
The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery due to developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The VR system, consisting of an SGI Onyx RE2, a FakeSpace BOOM and ImmersiveWorkbench, a Virtual Technologies CyberGlove, and an Ascension Technologies tracker, is currently in development and has already been used to visualize defects preoperatively. In the near future it will be used to plan surgeries more fully and to compute the projected result on soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, and networked virtual environment.
The Impact of User-Input Devices on Virtual Desktop Trainers
2010-09-01
playing the game more enjoyable. Some of these changes include the design of controllers, the controller interface, and ergonomic changes made to...within subjects experimental design to evaluate young active duty Soldier’s ability to move and shoot in a virtual environment using different input...sufficient gaming proficiency, resulting in more time dedicated to training military skills. We employed a within subjects experimental design to
Cognitive Aspects of Collaboration in 3d Virtual Environments
NASA Astrophysics Data System (ADS)
Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.
2016-06-01
Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions according to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation in solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need interactive and local access to a number of systems. WNoDeS can dynamically provision such systems by instantiating virtual machines according to the users' requirements (computing, storage and network resources), through either the Open Cloud Computing Interface API or a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns development and testing of services, and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
NASA Technical Reports Server (NTRS)
Johnson, David W.
1992-01-01
Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.
NASA Astrophysics Data System (ADS)
Lukes, George E.; Cain, Joel M.
1996-02-01
The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments, in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds, with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model application programmer's interface (API) to ensure that a common controlled-imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.
Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas
2008-01-01
PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes the exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, and modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
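Indoor navigation from a step recognition algorithm and an inertial heading, as described above, is classic pedestrian dead reckoning: each detected step advances the position estimate along the current heading. A minimal sketch (the fixed step length and the headings are illustrative values, not the paper's):

```python
import math

def dead_reckon(start, headings_deg, step_length_m=0.7):
    """Accumulate a position estimate from a sequence of per-step
    headings in degrees (0 = +y, 'north'; 90 = +x, 'east')."""
    x, y = start
    for h in headings_deg:
        rad = math.radians(h)
        x += step_length_m * math.sin(rad)
        y += step_length_m * math.cos(rad)
    return x, y

# Four steps north, then four steps east.
pos = dead_reckon((0.0, 0.0), [0, 0, 0, 0, 90, 90, 90, 90])
```

Drift in the heading sensor accumulates with every step, which is why the system above periodically re-anchors the estimate at known landmarks such as doors.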
Development of a virtual flight simulator.
Kuntz Rangel, Rodrigo; Guimarães, Lamartine N F; de Assis Correa, Francisco
2002-10-01
We present the development of a flight simulator that allows the user to interact in a created environment by means of virtual reality devices. This environment simulates the sight of a pilot in an airplane cockpit. The environment is projected in a helmet visor and allows the pilot to see inside as well as outside the cockpit. The movement of the airplane is independent of the movement of the pilot's head, which means that the airplane might travel in one direction while the pilot is looking at a 30-degree angle with respect to the traveled direction. In this environment, the pilot will be able to take off, fly, and land the airplane. So far, the objects in the environment are geometrical figures. This is an ongoing project, and only partial results are available now.
Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars
Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho
2015-01-01
In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
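The dynamic time-warping (DTW) recognition step described above can be sketched compactly: a candidate gesture signal is compared against stored templates and the nearest one wins. The following is a generic textbook DTW, not the paper's implementation, and uses 1-D signals with invented gesture names for brevity (the paper's signals would be multi-dimensional sensor streams):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time-warping distance
    between two 1-D sequences, with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(candidate, templates):
    """Return the label of the template nearest to the candidate."""
    return min(templates, key=lambda label: dtw_distance(candidate, templates[label]))

templates = {"wave": [0, 1, 0, -1, 0, 1, 0], "push": [0, 1, 2, 3, 3, 3]}
label = classify([0, 1, 0, -1, 0], templates)
```

DTW's appeal for gesture input is exactly what the abstract exploits: it tolerates variations in gesture speed, so the same hand motion performed faster or slower still matches its template.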
Design and Implementation of the Astronomical Cloud Computing Environment in China-VO
NASA Astrophysics Data System (ADS)
Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu
2017-06-01
The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can get computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.
Astronomical virtual observatory and the place and role of Bulgarian one
NASA Astrophysics Data System (ADS)
Petrov, Georgi; Dechev, Momchil; Slavcheva-Mihova, Luba; Duchlev, Peter; Mihov, Bojko; Kochev, Valentin; Bachev, Rumen
2009-07-01
A virtual observatory could be defined as a collection of integrated astronomical data archives and software tools that utilize computer networks to create an environment in which research can be conducted. Several countries have initiated national virtual observatory programs that combine existing databases from ground-based and orbiting observatories, scientific facilities especially equipped to detect and record naturally occurring phenomena. As a result, data from all the world's major observatories will be available to all users and to the public. This is significant not only because of the immense volume of astronomical data, but also because the data on stars and galaxies have been compiled from observations in a variety of wavelengths: optical, radio, infrared, gamma-ray, X-ray and more. In a virtual observatory environment, all of these data are integrated so that they can be synthesized and used in a given study. In the autumn of 2001 (26.09.2001), six organizations from Europe initiated the establishment of the Astronomical Virtual Observatory (AVO): ESO, ESA, Astrogrid, CDS, CNRS and Jodrell Bank (Dolensky et al., 2003). Its aims have been outlined as follows: - To provide comparative analysis of large sets of multiwavelength data; - To reuse data collected by a single source; - To provide uniform access to data; - To make data available to less-advantaged communities; - To be an educational tool.
The virtual observatory includes: - Tools that make it easy to locate and retrieve data from catalogues, archives, and databases worldwide; - Tools for data analysis, simulation, and visualization; - Tools to compare observations with results obtained from models, simulations and theory; - Interoperability: services that can be used regardless of the client's computing platform, operating system and software capabilities; - Access to data in near real-time, archived data and historical data; - Additional information: documentation, user guides, reports, publications, news and so on. This large growth of astronomical data and the necessity of easy access to those data led to the foundation of the International Virtual Observatory Alliance (IVOA). The IVOA was formed in June 2002. By January 2005, it had grown to include 15 funded VO projects from Australia, Canada, China, Europe, France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, the United Kingdom, and the United States. For the time being, Bulgaria is not a member of the European Astronomical Virtual Observatory, and as the Bulgarian Virtual Observatory is not a legal entity, we are not members of the IVOA. The main purpose of the project is for the Bulgarian Virtual Observatory to join the leading virtual astronomical institutions in the world. Initially the Bulgarian Virtual Observatory will include: - the BG Galaxian virtual observatory; - the BG Solar virtual observatory; - the Department of Star Clusters of IA, BAS; - the WFPDB group of IA, BAS. All available data will be integrated in the Bulgarian centers of astronomical data, conducted by the Wide Field Plate Archive data centre. For this purpose, PostgreSQL and/or MySQL will be installed on the server of BG-VO, along with SAADA tools and the ESO-MEX and/or DAL ToolKit to transform our FITS files into a standard format for VO tools. Some of the participants became acquainted with the principles of these products during the "Days of Virtual Observatory in Sofia", January 2008.
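The "uniform access to data" goal above is realized through standard VO protocols. One of the simplest is the IVOA Simple Cone Search, where a service takes three query parameters, RA, DEC and SR (all in decimal degrees), and returns a VOTable of sources in that sky cone. The sketch below builds such a request URL; the base URL is a placeholder, not a real BG-VO endpoint:

```python
from urllib.parse import urlencode

def cone_search_url(base_url, ra_deg, dec_deg, radius_deg):
    """Compose an IVOA Simple Cone Search request: RA/DEC give the
    cone center and SR its angular radius, all in decimal degrees."""
    query = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    return f"{base_url}?{query}"

# Hypothetical service, pointed at the sky position of M51.
url = cone_search_url("https://vo.example.org/conesearch", 202.48, 47.23, 0.1)
```

Because every compliant service accepts the same three parameters, a client can query archives worldwide with one piece of code, which is precisely the interoperability the text describes.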
NASA Technical Reports Server (NTRS)
Rabelo, Luis C.
2002-01-01
This is a report of my activities as a NASA Fellow during the summer of 2002 at the NASA Kennedy Space Center (KSC). The core of these activities is the assigned project: the Virtual Test Bed (VTB) from the Spaceport Engineering and Technology Directorate. The VTB Project has its foundations in the NASA Ames Research Center (ARC) Intelligent Launch & Range Operations program. The objective of the VTB project is to develop a new and unique collaborative computing environment where simulation models can be hosted and integrated in a seamless fashion. This collaborative computing environment will be used to build a Virtual Range as well as a Virtual Spaceport. This project will work as a technology pipeline to research, develop, test and validate R&D efforts against real-time operations without interfering with the actual operations or consuming the operational personnel's time. This report will also focus on the systems issues required to conceptualize and give form to a systems architecture capable of handling the different demands.
Ambient Intelligence in Multimedia and Virtual Reality Environments for Rehabilitation
NASA Astrophysics Data System (ADS)
Benko, Attila; Cecilia, Sik Lanyi
This chapter presents a general overview of the use of multimedia and virtual reality in rehabilitation and in assistive and preventive healthcare. It deals with AI-based multimedia and virtual reality applications intended for use by medical doctors, nurses, special teachers and other interested persons, and describes how multimedia and virtual reality can assist their work, including the areas in which they can help patients' everyday lives and their rehabilitation. In the second part of the chapter we present the Virtual Therapy Room (VTR), an application realized for aphasic patients that was created for practicing communication and expressing emotions in a group therapy setting. The VTR shows a room that contains a virtual therapist and four virtual patients (avatars). The avatars use their knowledge base to answer the questions of the user, providing an AI environment for rehabilitation. The user of the VTR is the aphasic patient, who has to solve the exercises. The picture relevant to the current task appears on the virtual blackboard. The patient answers questions of the virtual therapist; the questions are about pictures describing an activity or an object at different levels. The patient can ask an avatar for the answer. If the avatar knows the answer, its emotion changes from sad to happy. The avatar expresses its emotions in several dimensions: its behavior, facial expression, voice tone and responses also change. The emotion system can be described as a deterministic finite automaton whose states are emotion states and whose transition function is derived from the input-response reactions of an avatar. Natural language processing techniques were also implemented in order to establish high-quality human-computer interface windows for each of the avatars. Aphasic patients are able to interact with the avatars via these interfaces.
At the end of the chapter we outline possible directions for future research.
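The deterministic finite automaton formulation of the avatar emotion system mentioned above can be made concrete with a toy example. The states and events below are illustrative assumptions (the chapter does not publish its exact transition table); only the DFA mechanics match the description:

```python
# States are emotions; events describe whether the avatar could answer.
# The transition function is a plain lookup table, as in any DFA.
TRANSITIONS = {
    ("sad", "knows_answer"): "happy",
    ("sad", "no_answer"): "sad",
    ("happy", "knows_answer"): "happy",
    ("happy", "no_answer"): "sad",
}

def step(state, event):
    return TRANSITIONS[(state, event)]

def run(start, events):
    """Feed a sequence of events through the automaton."""
    state = start
    for e in events:
        state = step(state, e)
    return state

final = run("sad", ["no_answer", "knows_answer", "knows_answer"])
```

Because the automaton is deterministic, the avatar's emotional trajectory is fully reproducible from its interaction history, which makes the behavior easy to test and tune in a therapy setting.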
Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S
2011-06-06
We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.
Design of Virtual Reality Scene Roaming for Tour Animations Based on VRML and Java
NASA Astrophysics Data System (ADS)
Cao, Zaihui; hu, Zhongyan
Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of a tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of a metro map and an intuitive navigation system. Finally we present a prototype which implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. This is an important goal for the spatial information sciences.
Measuring Presence in Virtual Environments
1994-10-01
viewpoint to change what they see, or to reposition their head to affect binaural hearing, or to search the environment haptically, they will experience a...increase presence in an alternate environment. For example a head mounted display that isolates the user from the real world may increase the sense...movement interface devices such as treadmills and trampolines, different gloves, and auditory equipment. Even as a low end technological implementation of
Mission Exploitation Platform PROBA-V
NASA Astrophysics Data System (ADS)
Goor, Erwin
2016-04-01
VITO and partners developed an end-to-end solution to drastically improve the exploitation of the PROBA-V EO-data archive (http://proba-v.vgt.vito.be/), the past mission SPOT-VEGETATION and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as the large-scale on-demand processing of near real-time data. From November 2015 an operational Mission Exploitation Platform (MEP) PROBA-V, as an ESA pathfinder project, will be gradually deployed at the VITO data center with direct access to the complete data archive. Several applications will be released to the users, e.g. - A time series viewer, showing the evolution of PROBA-V bands and derived vegetation parameters for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains, e.g. for the calculation of N-daily composites. - A Virtual Machine will be provided with access to the data archive and tools to work with these data, e.g. various toolboxes and support for R and Python. After an initial release in January 2016, a research platform will gradually be deployed allowing users to design, debug and test applications on the platform. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will be addressed as well, e.g. to support the Cal/Val activities of the users. Users can make use of powerful Web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers a lot of technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components.
The impact of this MEP on the user community will be high and will completely change the way of working with the data and hence open the large time series to a larger community of users. The presentation will address these benefits for the users and discuss on the technical challenges in implementing this MEP.
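The N-daily composites mentioned among the processing chains are, in vegetation monitoring, commonly computed by maximum-value compositing: each output pixel takes the highest index value over an N-day window, which suppresses cloud-contaminated observations. The MEP's actual chains are not detailed in this abstract, so the following is a generic illustration on toy-sized grids:

```python
def n_daily_composite(daily_images, n):
    """daily_images: list of equally-sized 2-D grids (lists of lists)
    of NDVI-like values. Returns one per-pixel-maximum grid for each
    consecutive window of n days."""
    composites = []
    for start in range(0, len(daily_images), n):
        window = daily_images[start:start + n]
        rows, cols = len(window[0]), len(window[0][0])
        comp = [[max(img[r][c] for img in window) for c in range(cols)]
                for r in range(rows)]
        composites.append(comp)
    return composites

# Three days of 1x2 'images' reduced to a single 3-daily composite.
days = [[[0.2, 0.5]], [[0.6, 0.1]], [[0.3, 0.4]]]
result = n_daily_composite(days, 3)
```

On the platform itself this per-pixel reduction is what a Spark job would parallelize across the archive, one reason a Hadoop-based environment suits the workload.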
Visualizing the process of interaction in a 3D environment
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh
2007-03-01
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized, and how this can allow us to gain greater insight into the process of interaction and learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.
Productive confusions: learning from simulations of pandemic virus outbreaks in Second Life
NASA Astrophysics Data System (ADS)
Cárdenas, Micha; Greci, Laura S.; Hurst, Samantha; Garman, Karen; Hoffman, Helene; Huang, Ricky; Gates, Michael; Kho, Kristen; Mehrmand, Elle; Porteous, Todd; Calvitti, Alan; Higginbotham, Erin; Agha, Zia
2011-03-01
Users of immersive virtual reality environments have reported a wide variety of side and after effects including the confusion of characteristics of the real and virtual worlds. Perhaps this side effect of confusing the virtual and real can be turned around to explore the possibilities for immersion with minimal technological support in virtual world group training simulations. This paper will describe observations from my time working as an artist/researcher with the UCSD School of Medicine (SoM) and Veterans Administration San Diego Healthcare System (VASDHS) to develop trainings for nurses, doctors and Hospital Incident Command staff that simulate pandemic virus outbreaks. By examining moments of slippage between realities, both into and out of the virtual environment, moments of the confusion of boundaries between real and virtual, we can better understand methods for creating immersion. I will use the mixing of realities as a transversal line of inquiry, borrowing from virtual reality studies, game studies, and anthropological studies to better understand the mechanisms of immersion in virtual worlds. Focusing on drills conducted in Second Life, I will examine moments of training to learn the software interface, moments within the drill and interviews after the drill.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users because it provides a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images, which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Simulation Platform: a cloud-based online simulation environment.
Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro
2011-09-01
For multi-scale and multi-modal neural modeling, multiple neural models described at different levels need to be handled seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but the databases should also give users a chance to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, is pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation without needing to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate the impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.
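The request handling described above (a user posts a request, a free virtual machine is assigned, and it is returned when the session ends) can be sketched as a small pool manager. This is a purely hypothetical illustration; the VM names and pool logic are invented for the example, not Simulation Platform's actual implementation:

```python
from collections import deque

class VMPool:
    """Toy pool of pre-provisioned virtual machines, one per user request."""

    def __init__(self, vm_names):
        self.free = deque(vm_names)   # machines waiting for a request
        self.assigned = {}            # user -> machine currently in use

    def request(self, user):
        if not self.free:
            raise RuntimeError("no virtual machine available")
        vm = self.free.popleft()
        self.assigned[user] = vm
        return vm

    def release(self, user):
        # Return the user's machine to the back of the free queue
        self.free.append(self.assigned.pop(user))

pool = VMPool(["vm-01", "vm-02"])
vm = pool.request("alice")   # "alice" is assigned the first free machine
print(vm)
pool.release("alice")
```

The deque gives simple first-in, first-out reuse of machines; a real service would also track machine health and session timeouts.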
Reprint of: Simulation Platform: a cloud-based online simulation environment.
Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro
2011-11-01
For multi-scale and multi-modal neural modeling, multiple neural models described at different levels need to be handled seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but the databases should also give users a chance to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, is pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation without needing to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate the impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.
Buzzi, Jacopo; Ferrigno, Giancarlo; Jansma, Joost M.; De Momi, Elena
2017-01-01
Teleoperated robotic systems are spreading widely across many different fields, from hazardous environment exploration to surgery. In teleoperation, users directly manipulate a master device to achieve task execution at the slave robot side; this interaction is fundamental to guaranteeing both system stability and task execution performance. In this work, we propose a non-disruptive method to study arm endpoint stiffness. We evaluate how users exploit the kinetic redundancy of the arm to achieve stability and precision during the execution of different tasks with different master devices. Four users were asked to perform two planar trajectory-following virtual tasks using both a serial- and a parallel-link master device. The users' arm kinematics and muscular activation were acquired and combined with a user-specific musculoskeletal model to estimate the joint stiffness. Using the arm's kinematic Jacobian, the arm endpoint stiffness was derived. The proposed non-disruptive method is capable of estimating arm endpoint stiffness during the execution of virtual teleoperated tasks. The obtained results are in accordance with the existing literature on human motor control and show, throughout the tested trajectory, a modulation of arm endpoint stiffness that is affected by task characteristics and by hand speed and acceleration. PMID:29018319
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. The characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we verified that commercial hardware and software can construct accurate 3-D models by analyzing the accuracy of dimensional measurements in a virtual environment and by comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Cyber entertainment system using an immersive networked virtual environment
NASA Astrophysics Data System (ADS)
Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi
2002-05-01
The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence, owing to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces the system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, allowing two players to compete with each other. An avatar of each player is displayed by the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors, and they are able to play golf without any encumbrance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William Eugene
These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - Installing on Windows with an installer executable, Installing with a Linux application utility, Installing a Python package from the PyPI repository, and Installing a Python package from source. The Bad: the user does not have administrative privileges - Using a virtual environment to isolate package installations, and Using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - Installing a Python extension package from source, and PyCoinInstall - Managing builds for Python extension packages. The last item, PyCoinInstall, refers to a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
[Use of virtual reality in forensic psychiatry. A new paradigm?].
Fromberger, P; Jordan, K; Müller, J L
2014-03-01
For more than 20 years virtual realities (VR) have been successfully used in the assessment and treatment of psychiatric disorders. The most important advantages of VR are the high ecological validity of virtual environments, the complete controllability of stimuli in the virtual environment, and the capability to induce the sensation of being in the virtual environment instead of the physical environment. VR provides the opportunity to confront the user with stimuli and situations that are not available, or too risky, in reality. Despite these advantages, VR-based applications have not yet been applied in forensic psychiatry. On the basis of an overview of the recent state of the art in VR-based applications in general psychiatry, this article demonstrates the advantages and possibilities of VR-based applications in forensic psychiatry. Up to now, only preliminary studies regarding the VR-based assessment of pedophilic interests exist. These studies demonstrate the potential of ecologically valid VR-based applications for the assessment of forensically relevant disorders. One of the most important advantages is the possibility of using VR to assess the behavior of forensic inpatients in crime-related situations without endangering others. This provides completely new possibilities not only for the assessment but also for the treatment of forensic inpatients. Before utilizing these possibilities in clinical practice, exhaustive research and development will be necessary. Given the high potential of VR-based applications, this effort would be worthwhile.
Linking the EarthScope Data Virtual Catalog to the GEON Portal
NASA Astrophysics Data System (ADS)
Lin, K.; Memon, A.; Baru, C.
2008-12-01
The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities that allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, providing users seamless access to data in GEON as well as EarthScope via a common user interface.
Distance Learning: Information Access and Services for Virtual Users.
ERIC Educational Resources Information Center
Iyer, Hemalata, Ed.
This volume centers broadly on information support services for distance education. The articles in this book can be categorized into two areas: access to information resources for distance learners, and studies of distance learning programs. Contents include: "The Challenges and Benefits of Asynchronous Learning Networks" (Daphne…
Virtual reality enhanced mannequin (VREM) that is well received by resuscitation experts.
Semeraro, Federico; Frisoli, Antonio; Bergamasco, Massimo; Cerchiari, Erga L
2009-04-01
The objective of this study was to test acceptance of, and interest in, a newly developed prototype of a virtual reality enhanced mannequin (VREM) on a sample of congress attendees who volunteered to participate in the evaluation session and to respond to a specifically designed questionnaire. A commercial Laerdal HeartSim 4000 mannequin was extended with virtual reality (VR) technologies and specially developed virtual reality software to increase the immersive perception of emergency scenarios. To evaluate acceptance of the VREM, we presented it to a sample of 39 possible users. Each evaluation session involved one trainee and two instructors with a standardized procedure and scenario: the operator was invited by the instructor to wear the data gloves and the head-mounted display and was briefly introduced to the scope of the simulation. The instructor helped the operator familiarize himself with the environment. After the patient's collapse, the operator was asked to check the patient's clinical condition and start CPR. Finally, the patient started to recover signs of circulation and the evaluation session was concluded. Each participant was then asked to respond to a questionnaire designed to explore the trainee's perception in the areas of user-friendliness, realism, and interaction/immersion. Overall, the evaluation of the system was very positive, as was the feeling of immersion and the realism of the environment and simulation. Overall, 84.6% of the participants judged the virtual reality experience as interesting and believed that its development could be very useful for healthcare training. The prototype virtual reality enhanced mannequin was well liked, without interference from the interaction devices, and deserves full technological development and validation in emergency medical training.
ERIC Educational Resources Information Center
Bricken, Meredith; Byrne, Chris M.
The goal of this study was to take a first step in evaluating the potential of virtual reality (VR) as a learning environment. The context of the study was The Technology Academy, a technology-oriented summer day camp for students ages 5-18, where student activities center around hands-on exploration of new technology (e.g., robotics, MIDI digital…
Augmented Reality for Close Quarters Combat
None
2018-01-16
Sandia National Laboratories has developed a state-of-the-art augmented reality training system for close-quarters combat (CQB). The system uses a wearable augmented reality system to place the user in a real environment while engaging enemy combatants in virtual space (Boston Dynamics DI-Guy). The Umbra modeling and simulation environment is used to integrate and control the AR system.
User Control and Task Authenticity for Spatial Learning in 3D Environments
ERIC Educational Resources Information Center
Dalgarno, Barney; Harper, Barry
2004-01-01
This paper describes two empirical studies which investigated the importance for spatial learning of view control and object manipulation within 3D environments. A 3D virtual chemistry laboratory was used as the research instrument. Subjects, who were university undergraduate students (34 in the first study and 80 in the second study), undertook…
ERIC Educational Resources Information Center
Masson, Steve; Vazquez-Abad, Jesus
2006-01-01
This paper proposes a new way to integrate history of science in science education to promote conceptual change by introducing the notion of historical microworld, which is a computer-based interactive learning environment respecting historic conceptions. In this definition, "interactive" means that the user can act upon the virtual environment by…
ERIC Educational Resources Information Center
Alvarez, Nahum; Sanchez-Ruiz, Antonio; Cavazza, Marc; Shigematsu, Mika; Prendinger, Helmut
2015-01-01
The use of three-dimensional virtual environments in training applications supports the simulation of complex scenarios and realistic object behaviour. While these environments have the potential to provide an advanced training experience to students, it is difficult to design and manage a training session in real time due to the number of…
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system that linked every real robot to its virtual counterpart. A novel virtual interface, integrated with augmented reality, can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment and improve user situational awareness. An operator can efficiently control the real multi-robot team using the Drag-to-Move method on the virtual multi-robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature is able to reduce operator workload.
Using EMG to anticipate head motion for virtual-environment applications
NASA Technical Reports Server (NTRS)
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-01-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of remaining stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle to implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted six-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced below about 25 ms. We present a novel approach to eliminating latencies, premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing the neck muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
Using EMG to anticipate head motion for virtual-environment applications.
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-06-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of remaining stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle to implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted six-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced below about 25 ms. We present a novel approach to eliminating latencies, premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing the neck muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
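The core idea, a signal that leads head acceleration can be mapped onto a time-advanced prediction of it, can be illustrated with a much simpler stand-in for the paper's neural network: a least-squares map from a window of synthetic "EMG" samples to the acceleration 70 ms ahead. Everything here (sampling rate, signal shapes, window length) is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                   # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t)         # synthetic head acceleration
lead = int(0.070 * fs)                      # EMG leads force by ~70 ms
# Synthetic EMG: a noisy copy of acceleration shifted 70 ms *earlier*
emg = np.roll(accel, -lead) + 0.05 * rng.standard_normal(t.size)

win = 50                                    # EMG window length (samples)
# Each row of X is a window of EMG; y is acceleration 70 ms after window end
X = np.lib.stride_tricks.sliding_window_view(emg, win)[:-lead]
y = accel[win - 1 + lead:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the linear predictor

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)                                 # small: the lead is recoverable
```

A real system would of course use rectified, filtered multi-channel EMG and a nonlinear model, but the alignment bookkeeping (window end plus lead time) is the same.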
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-09-30
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
2005-12-01
weapon system evaluation as a high-level architecture and distributed interactive simulation compliant, human-in-the-loop, virtual environment… Directorate to participate in the Limited Early User Evaluation (LEUE) of the Common Avionics Architecture System (CAAS) cockpit. ARL conducted a human… CAAS, the UH-60M PO conducted a limited early user evaluation (LEUE) to evaluate the integration of the CAAS in the UH-60M crew station.
NASA Technical Reports Server (NTRS)
Orr, Joel N.
1995-01-01
Reflecting on the human-computer interface and its requirements as virtual technology advances, the author proposes a new term: 'Pezonomics'. The term replaces ergonomics ('the law of work') with a definition pointing to 'the law of play.' The necessity of this term, the author reasons, comes from the need to 'capture the essence of play and calibrate our computer systems to its cadences.' Pezonomics will ensure that artificial environments, in particular virtual reality, are user friendly.
2010-12-01
Ibid. 24. Ibid. 25. Ibid. 26. Carlson, “Verizon Unifies Communications,” 1. 27. Bednarz, “Users Turn to Virtual Data Marts,” 56. 28. Coombs …systems that do not communicate.” Data format standards are an oft-tried interoperability approach to homogenize interfaces between functional, physical…instance, and not the collection sources used to create the warning. Unfortunately, the intelligence community (IC) has yet to widely decouple
Exploiting virtual synchrony in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
Applications of a virtually synchronous environment for distributed programming are described; this environment underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events such as broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words synchronously. A major advantage of this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault-tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.
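The property that broadcasts and membership changes appear to occur instantaneously can be sketched with a toy model (hypothetical, not the ISIS implementation): if a single sequencer totally orders all group events and every member applies them in that order, all members observe an identical history:

```python
class Member:
    def __init__(self, name):
        self.name = name
        self.view = []           # this member's ordered event history

    def apply(self, event):
        self.view.append(event)

class Sequencer:
    """Toy total-order sequencer: one log, applied identically everywhere."""

    def __init__(self):
        self.members = []
        self.log = []            # totally ordered event log

    def join(self, member):
        self.members.append(member)
        self.log.append(("join", member.name))   # membership change is an event too

    def broadcast(self, sender, msg):
        self.log.append(("msg", sender.name, msg))

    def deliver_all(self):
        # Deliver the whole log, in order, to every member
        for event in self.log:
            for m in self.members:
                m.apply(event)

seq = Sequencer()
a, b = Member("a"), Member("b")
seq.join(a); seq.join(b)
seq.broadcast(a, "hello")
seq.broadcast(b, "world")
seq.deliver_all()
print(a.view == b.view)  # every member sees the same event ordering
```

A production system replaces the central sequencer with a fault-tolerant ordering protocol, but the invariant it must preserve is exactly the one shown: identical views of the event sequence at every member.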
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markidis, S.; Rizwan, U.
The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators could experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures, and that they are not fully interactive and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture from the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded from a laptop or tablet PC added to the virtual environment. (authors)
Utilization of Virtual Server Technology in Mission Operations
NASA Technical Reports Server (NTRS)
Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.
2010-01-01
Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages, and any disadvantages, of virtualization in all of the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
Ragan, Eric D; Scerbo, Siroberto; Bacim, Felipe; Bowman, Doug A
2017-08-01
Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
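The amplification technique described above, mapping smaller physical turns to larger virtual turns, reduces to a gain applied to head yaw. A minimal sketch (the function name and gain values are illustrative, not the study's implementation):

```python
def virtual_yaw(physical_yaw_deg: float, gain: float) -> float:
    """Map a physical head yaw to an amplified virtual yaw in [0, 360).

    With gain g, a physical turn of x degrees produces a virtual turn
    of g * x degrees, so a limited physical range can cover the full
    surrounding environment.
    """
    return (physical_yaw_deg * gain) % 360.0

# With a gain of 4, physical head yaw in [-45, +45] degrees spans the
# entire 360 degrees of virtual space:
print(virtual_yaw(30.0, 4.0))    # 120.0
print(virtual_yaw(-45.0, 4.0))   # 180.0 (wrapped)
```

The study's finding that high amplification caused problems in a head-mounted display corresponds to large values of `gain` here: small physical movements then produce large, potentially disorienting virtual rotations.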
Constraint, Intelligence, and Control Hierarchy in Virtual Environments. Chapter 1
NASA Technical Reports Server (NTRS)
Sheridan, Thomas B.
2007-01-01
This paper seeks to deal directly with the question of what makes virtual actors and objects experienced in virtual environments seem real. (The term virtual reality, while more common in public usage, is an oxymoron; therefore virtual environment is the preferred term in this paper.) Reality is a difficult topic, treated for centuries in the sub-fields of philosophy called ontology ("of or relating to being or existence") and epistemology ("the study of the method and grounds of knowledge, especially with reference to its limits and validity") (both from Webster's, 1965). Advances in recent decades in the technologies of computers, sensors and graphics software have permitted human users to feel present, or immersed, in computer-generated virtual environments. This has motivated a keen interest in probing the phenomenon of presence and immersion not only philosophically but also psychologically and physiologically, in terms of the parameters of the senses and sensory stimulation that correlate with the experience (Ellis, 1991). The pages of the journal Presence: Teleoperators and Virtual Environments have seen much discussion of what makes virtual environments seem real (see, e.g., Slater, 1999; Slater et al., 1994; Sheridan, 1992, 2000). Stephen Ellis, when organizing the meeting that motivated this paper, suggested to invited authors that "we may adopt as an organizing principle for the meeting that the genesis of apparently intelligent interaction arises from an upwelling of constraints determined by a hierarchy of lower levels of behavioral interaction." My first reaction was "huh?" and my second was "yeah, that seems to make sense." Accordingly, the paper seeks to explain, from the author's viewpoint, why Ellis's hypothesis makes sense. What is the connection of "presence" or "immersion" of an observer in a virtual environment to "constraints," and what types of constraints? What of "intelligent interaction," and is it the intelligence of the observer or the intelligence of the environment (whatever the latter may mean) that is salient? And finally, what might be relevant about the "upwelling" of constraints as determined by a hierarchy of levels of interaction?