Sample records for manipulating virtual objects

  1. Third-Graders Learn about Fractions Using Virtual Manipulatives: A Classroom Study

    ERIC Educational Resources Information Center

    Reimer, Kelly; Moyer, Patricia S.

    2005-01-01

    With recent advances in computer technology, it is no surprise that the manipulation of objects in mathematics classrooms now includes the manipulation of objects on the computer screen. These objects, referred to as "virtual manipulatives," are essentially replicas of physical manipulatives placed on the World Wide Web in the form of computer…

  2. Object Creation and Human Factors Evaluation for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1998-01-01

    The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, AL. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using a data glove, hand controller, or mouse. These simulated objects are solid or surfaced three-dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.

  3. A Study of Multi-Representation of Geometry Problem Solving with Virtual Manipulatives and Whiteboard System

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie

    2009-01-01

    In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…

  4. Virtual and concrete manipulatives: a comparison of approaches for solving mathematics problems for students with autism spectrum disorder.

    PubMed

    Bouck, Emily C; Satsangi, Rajiv; Doughty, Teresa Taber; Courtney, William T

    2014-01-01

    Students with autism spectrum disorder (ASD) are included in general education classes and expected to participate in general education content, such as mathematics. Yet, little research explores academically-based mathematics instruction for this population. This single subject alternating treatment design study explored the effectiveness of concrete (physical objects that can be manipulated) and virtual (3-D objects from the Internet that can be manipulated) manipulatives to teach single- and double-digit subtraction skills. Participants in this study included three elementary-aged students (ages ranging from 6 to 10) diagnosed with ASD. Students were selected from a clinic-based setting, where all participants received medically necessary intensive services provided via one-to-one, trained therapists. Both forms of manipulatives successfully assisted students in accurately and independently solving subtraction problems. However, all three students demonstrated greater accuracy and faster independence with the virtual manipulatives as compared to the concrete manipulatives. Beyond correctly solving the subtraction problems, students were also able to generalize their learning of subtraction through concrete and virtual manipulatives to more real-world applications.

  5. Direct Manipulation in Virtual Reality

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.

  6. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking, in real time, foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
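
    The chroma-key step mentioned above can be illustrated with a minimal sketch. The pixel values, key color, and threshold here are invented for illustration; the paper's actual extraction scheme relies on background image mosaics with an active camera, which this sketch does not model.

```python
def chroma_key_mask(pixels, key=(0, 255, 0), threshold=120):
    """Label each RGB pixel as foreground (True) when it differs
    sufficiently from the key color, using Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [dist(p, key) > threshold for p in pixels]

# A green-screen pixel stays background; a skin-toned pixel is foreground.
mask = chroma_key_mask([(0, 250, 5), (200, 160, 140)])
```

    In a real system the mask would then be cleaned up (e.g., by morphological filtering) before the foreground object is composited into the virtual meeting room.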

  7. Design of a lightweight, cost effective thimble-like sensor for haptic applications based on contact force sensors.

    PubMed

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation.
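
    The normal/tangential decomposition described above can be caricatured in a few lines. The sensor placement (one pad sensor plus two opposing side pairs) and the unit gain are hypothetical assumptions for illustration, not the calibration reported in the paper.

```python
def fingertip_forces(pad_sensor, side_sensors, gain=1.0):
    """Estimate fingertip forces from a sensorized thimble.

    pad_sensor: reading of the sensor under the fingertip pad,
        which measures normal force directly.
    side_sensors: readings from sensors on the thimble sides;
        opposing pairs yield a tangential-force estimate.
    """
    normal = gain * pad_sensor
    # Tangential components estimated from opposing side-sensor pairs.
    tangential_x = gain * (side_sensors["right"] - side_sensors["left"])
    tangential_y = gain * (side_sensors["distal"] - side_sensors["proximal"])
    return normal, (tangential_x, tangential_y)

n, t = fingertip_forces(2.0, {"left": 0.25, "right": 0.75,
                              "proximal": 0.5, "distal": 0.5})
```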

  8. Design of a Lightweight, Cost Effective Thimble-Like Sensor for Haptic Applications Based on Contact Force Sensors

    PubMed Central

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation. PMID:22247677

  9. Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)

    2003-01-01

    A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.

  10. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel.
More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  11. Welcome to Wonderland: The Influence of the Size and Shape of a Virtual Hand On the Perceived Size and Shape of Virtual Objects

    PubMed Central

    Linkenauger, Sally A.; Leyrer, Markus; Bülthoff, Heinrich H.; Mohler, Betty J.

    2013-01-01

    The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However, to test this, one must be able to manipulate the size and/or dimensions of the perceiver’s hand, which is difficult in the real world due to the impliability of hand dimensions. To overcome this limitation, we used virtual reality to manipulate dimensions of participants’ fully-tracked, virtual hands to investigate their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals’ estimations of the sizes of virtual objects differ depending on the size of their virtual hand in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants’ virtual hands rather than another avatar’s hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of the spatial layout, they also demonstrate the influence of virtual bodies on perception of virtual environments. PMID:23874681
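
    The body-based scaling hypothesis tested in this record can be caricatured in a few lines: if the hand serves as a ruler, an enlarged hand makes the same object span fewer "hand units" and thus appear smaller. The linear form and all numbers below are illustrative assumptions, not the authors' model.

```python
def perceived_size(physical_size, virtual_hand_length, baseline_hand_length):
    """Illustrative body-based scaling: apparent size shrinks in
    proportion to how much the virtual hand is enlarged relative
    to the observer's familiar (baseline) hand length."""
    return physical_size * baseline_hand_length / virtual_hand_length

# With an unchanged 18 cm hand the 10 cm object looks its true size;
# doubling the virtual hand halves its apparent size.
normal_hand = perceived_size(10.0, 18.0, 18.0)
big_hand = perceived_size(10.0, 36.0, 18.0)
```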

  12. The use of physical and virtual manipulatives in an undergraduate mechanical engineering (Dynamics) course

    NASA Astrophysics Data System (ADS)

    Pan, Edward A.

    Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in engineering classes. This dissertation investigated how adding physical or virtual learning objects (called manipulatives) to courses that require mental visualization of mechanical systems can aid student performance. Dynamics is one such course, and tends to be taught using lecture and textbooks with static diagrams of moving systems. Students often fail to solve the problems correctly, and an inability to mentally visualize the system can contribute to student difficulties. This study found no differences between treatment groups on quantitative measures of spatial ability and conceptual knowledge. There were differences between treatments on measures of mechanical reasoning ability, in favor of the use of physical and virtual manipulatives over static diagrams alone. There were no major differences in student performance between the use of physical and virtual manipulatives. Students used the physical and virtual manipulatives to test their theories about how the machines worked; however, their actual time handling the manipulatives was extremely limited relative to the amount of time they spent working on the problems. Students used the physical and virtual manipulatives as visual aids when communicating about the problem with their partners, and this behavior was also seen with Traditional group students who had to use the static diagrams and gesture instead. The explanations students gave for how the machines worked provided evidence of mental simulation; however, their causal chain analyses were often flawed, probably due to attempts to decrease cognitive load.
Student opinions about the static diagrams and dynamic models varied by type of model (static, physical, virtual), but were generally favorable. The Traditional group students, however, indicated that the lack of adequate representation of motion in the static diagrams was a problem, and wished they had access to the physical and virtual models.

  13. Promoting Technology Uses in the Elementary Mathematics Classroom: Lessons in Pedagogy from Zoltan Dienes

    ERIC Educational Resources Information Center

    Connell, Michael; Abramovich, Sergei

    2016-01-01

    Today technology allows for the utilization of new classes of mathematical objects which are themselves subject to new modes of student interaction. A series of notable examples may be found in the National Library of Virtual Manipulatives. These virtual manipulatives draw much of their power from their physical embodiment in the form of hand-on…

  14. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    control of position/orientation of mobile TV cameras. 9 Unit 9 Force interaction system Unit 6 Helmet mounted displays robot like device drive...joints of the master arm (see Unit 1) whose joint coordinates are tracked by the virtual manipulator. Unit 6. Two displays built in the helmet...special device for simulating the tactile-kinaesthetic effect of immersion. When the virtual body is a manipulator it comprises: − master arm with 6

  15. Grip force control during virtual object interaction: effect of force feedback, accuracy demands, and training.

    PubMed

    Gibo, Tricia L; Bastian, Amy J; Okamura, Allison M

    2014-03-01

    When grasping and manipulating objects, people are able to efficiently modulate their grip force according to the experienced load force. Effective grip force control involves providing enough grip force to prevent the object from slipping, while avoiding excessive force to avoid damage and fatigue. During indirect object manipulation via teleoperation systems or in virtual environments, users often receive limited somatosensory feedback about objects with which they interact. This study examines the effects of force feedback, accuracy demands, and training on grip force control during object interaction in a virtual environment. The task required subjects to grasp and move a virtual object while tracking a target. When force feedback was not provided, subjects failed to couple grip and load force, a capability fundamental to direct object interaction. Subjects also exerted larger grip force without force feedback and when accuracy demands of the tracking task were high. In addition, the presence or absence of force feedback during training affected subsequent performance, even when the feedback condition was switched. Subjects' grip force control remained reminiscent of their employed grip during the initial training. These results motivate the use of force feedback during telemanipulation and highlight the effect of force feedback during training.
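
    The grip/load coupling this study measures follows a well-known rule of thumb: to prevent slip, grip force must exceed the load force divided by the friction at the finger pads, and people typically add a safety margin on top. A minimal sketch, with all coefficients invented for illustration:

```python
def required_grip(load_force, friction_coeff=0.8, safety_margin=1.2):
    """Minimum grip force for a two-finger pinch, plus a safety margin.

    Slip condition for two opposing contacts:
        2 * friction_coeff * grip >= load_force
    """
    slip_limit = load_force / (2 * friction_coeff)
    return safety_margin * slip_limit

# A 4 N load with mu = 0.8 needs at least 2.5 N of grip;
# with a 20% margin the modulated grip is about 3 N.
grip = required_grip(4.0)
```

    The study's finding that subjects over-grip without force feedback corresponds, in this toy model, to falling back on a large fixed grip rather than tracking the slip limit.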

  16. Vibrotactile sensory substitution for object manipulation: amplitude versus pulse train frequency modulation.

    PubMed

    Stepp, Cara E; Matsuoka, Yoky

    2012-01-01

    Incorporating sensory feedback with prosthetic devices is now possible, but the optimal methods of providing such feedback are still unknown. The relative utility of amplitude and pulse train frequency modulated stimulation paradigms for providing vibrotactile feedback for object manipulation was assessed in 10 participants. The two approaches were studied during virtual object manipulation using a robotic interface as a function of presentation order and a simultaneous cognitive load. Despite the potential pragmatic benefits associated with pulse train frequency modulated vibrotactile stimulation, comparison of the approach with amplitude modulation indicates that amplitude modulation vibrotactile stimulation provides superior feedback for object manipulation.
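
    The two encodings being compared can be sketched as two mappings from sensed force to a vibrotactile drive signal. The force range, amplitude scale, and pulse-rate ceiling below are hypothetical, not taken from the study.

```python
def amplitude_modulation(force, max_force=10.0, max_amp=1.0):
    """Encode force as vibration amplitude at a fixed carrier frequency."""
    return max_amp * min(force, max_force) / max_force

def pulse_frequency_modulation(force, max_force=10.0, max_rate_hz=40.0):
    """Encode force as the repetition rate of fixed-amplitude pulses."""
    return max_rate_hz * min(force, max_force) / max_force

amp = amplitude_modulation(2.5)          # quarter of full amplitude
rate = pulse_frequency_modulation(2.5)   # quarter of the maximum pulse rate
```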

  17. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
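
    Rendering a virtual surface on such a pin array reduces, in the simplest case, to sampling the surface height under each pin and clamping the command to the actuators' range. The grid size, pin pitch, and surface function here are invented for illustration:

```python
def pin_heights(surface, grid=4, pitch=1.0, max_height=10.0):
    """Sample a virtual surface z = surface(x, y) under a grid x grid
    array of pins spaced `pitch` apart, clamping each commanded
    height to the actuators' physical range [0, max_height]."""
    heights = []
    for i in range(grid):
        row = []
        for j in range(grid):
            z = surface(i * pitch, j * pitch)
            row.append(max(0.0, min(max_height, z)))
        heights.append(row)
    return heights

# A plane tilted along x: successive rows of pins step up by 2 units.
plane = lambda x, y: 2.0 * x
h = pin_heights(plane)
```

    In the actual system these heights would be recomputed as the robot arm carries the array through the virtual world, so the pins track whatever surface lies under the "virtual hand".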

  18. A virtual work space for both hands manipulation with coherency between kinesthetic and visual sensation

    NASA Technical Reports Server (NTRS)

    Ishii, Masahiro; Sukanya, P.; Sato, Makoto

    1994-01-01

    This paper describes the construction of a virtual work space for tasks performed by two handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually act in a real environment. Our approach uses a three dimensional spatial interface device that allows the user to handle virtual objects by hand and be able to feel some physical properties such as contact, weight, etc. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face and fit task. We then selected the conditions under which the subjects felt most comfortable in performing this task and set up our virtual work space. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and then let the subjects create new models by assembling these components. The subjects can naturally perform assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for performing tasks that require two-handed manipulation or cooperation between both hands in a natural manner.

  19. How Do Students Learn to See Concepts in Visualizations? Social Learning Mechanisms with Physical and Virtual Representations

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2017-01-01

    STEM instruction often uses visual representations. To benefit from these, students need to understand how representations show domain-relevant concepts. Yet, this is difficult for students. Prior research shows that physical representations (objects that students manipulate by hand) and virtual representations (objects on a computer screen that…

  20. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made out of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized in order to allow high-quality rendering.

  1. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
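
    The two remapping techniques compared in this record can be sketched in one dimension. The static technique shifts the virtual hand by a constant offset; the interpolated technique blends the virtual hand away from the physical hand as the reach progresses, so it lands on the virtual object just as the physical hand touches the physical prop. The linear interpolation schedule below is a plausible reading of the description, not the authors' exact formulation.

```python
def static_offset(physical_hand, offset):
    """Apply a constant translation between physical and virtual hand."""
    return physical_hand + offset

def interpolated_reach(physical_hand, start, physical_target, virtual_target):
    """Blend the virtual hand from the physical hand's position toward
    the virtual target as the reach progresses (0 at start, 1 at the
    physical target)."""
    progress = (physical_hand - start) / (physical_target - start)
    progress = max(0.0, min(1.0, progress))
    return physical_hand + progress * (virtual_target - physical_target)

# Physical prop at 0.6 m, virtual object at 0.8 m, reach starts at 0.0 m.
mid = interpolated_reach(0.3, 0.0, 0.6, 0.8)   # halfway: offset half applied
end = interpolated_reach(0.6, 0.0, 0.6, 0.8)   # virtual hand reaches 0.8 m
```

    With this scheme, one physical prop can stand in for virtual objects at several egocentric locations, at the cost of a visible-but-gradual mismatch between seen and felt hand motion.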

  2. Creating objects and object categories for studying perception and perceptual learning.

    PubMed

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-11-02

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
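
    The virtual-phylogenesis idea above (category structure emerging from inherited random variation rather than experimenter-imposed rules) can be caricatured with flat shape vectors. The representation and parameters are illustrative assumptions only; the actual method grows 3-D "digital embryos" by simulated embryogenesis.

```python
import random

def virtual_phylogenesis(ancestor, generations=3, children=2,
                         sigma=0.1, seed=42):
    """Grow an object category by repeatedly 'mutating' an ancestor
    shape vector; all descendants of one ancestor form one category,
    so within-category variation arises from the process itself."""
    rng = random.Random(seed)
    population = [list(ancestor)]
    for _ in range(generations):
        population = [
            [g + rng.gauss(0.0, sigma) for g in parent]
            for parent in population
            for _ in range(children)
        ]
    return population

# 3 generations with 2 children each: 8 related shapes per category.
category = virtual_phylogenesis([1.0, 0.5, -0.3])
```

    Running the same procedure from a different ancestor (or a different seed) yields a second category whose members are mutually related but distinct from the first, which is what makes the resulting classification tasks quantifiable.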

  3. Laboratory E-Notebooks: A Learning Object-Based Repository

    ERIC Educational Resources Information Center

    Abari, Ilior; Pierre, Samuel; Saliah-Hassane, Hamadou

    2006-01-01

    During distributed virtual laboratory experiment sessions, a major problem is to be able to collect, store, manage and share heterogeneous data (intermediate results, analysis, annotations, etc) manipulated simultaneously by geographically distributed teammates composing a virtual team. The electronic notebook is a possible response to this…

  4. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of the various components of a constructed object, which creates the need to integrate different virtual objects in order to simulate the whole technical system under investigation. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created using VR (Virtual Reality) class software. In the elaborated interactive application, procedures were created for controlling the translatory-motion drive system, the rotary-motion drive system and the drive system of a manipulator, together with a procedure for turning on and off the crushing head mounted on the last element of the manipulator. Procedures based on dynamic data exchange (DDE) were also established to receive input data from external software, allowing the actuators of the particular control systems of the considered machine to be controlled. In the next stage of the work, a control program for the virtual controller was created in the ladder diagram (LD) language, based on the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In this application, procedures were developed that collect data from the virtual controller running in simulation mode and transfer them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.

  5. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study.

    PubMed

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-10-25

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the "Florida Secundaria" high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed for more meaningful learning (the concepts learnt proved more durable).

  6. Magnetosensitive e-skins with directional perception for augmented reality

    PubMed Central

    Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys

    2018-01-01

    Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information about our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices that extract information from their surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications, from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121
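
    The spin valve sensors in this e-skin are read out through Wheatstone bridges, whose differential output shifts when a field-dependent resistance unbalances one arm. A minimal sketch of that readout relation (the resistor and voltage values below are hypothetical illustrations, not taken from the paper):

```python
def wheatstone_out(v_in, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge with arms R1/R2 and R3/R4.

    A magnetic field changes a spin valve's resistance, unbalancing the
    bridge and shifting the output voltage away from zero.
    """
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

balanced = wheatstone_out(1.0, 1000, 1000, 1000, 1000)  # no field: bridge balanced
sensed = wheatstone_out(1.0, 1000, 1050, 1000, 1000)    # one arm changed by 5%
print(balanced, round(sensed, 4))  # 0.0 0.0122
```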

  7. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    PubMed Central

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13].
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
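
    The paper's VM and VP algorithms operate on full 3-D embryo meshes; purely as an illustration of the virtual-phylogenesis idea, the sketch below (the function names and the flat shape-parameter representation are hypothetical simplifications) grows an object 'category' by repeated random mutation of a common ancestor:

```python
import random

def mutate(shape, rate=0.1, rng=random):
    """Return a perturbed copy of a shape-parameter vector."""
    return [g + rng.gauss(0, rate) for g in shape]

def virtual_phylogenesis(ancestor, generations=3, offspring=2, rng=random):
    """Grow a family tree by repeated mutation; the leaves of one
    subtree form a naturalistic category of related shapes."""
    population = [ancestor]
    for _ in range(generations):
        population = [mutate(p, rng=rng) for p in population for _ in range(offspring)]
    return population

rng = random.Random(42)
ancestor = [0.0] * 8  # eight abstract shape parameters
category = virtual_phylogenesis(ancestor, rng=rng)
print(len(category))  # 2**3 = 8 category members
```

    Because all members descend from one ancestor, within-category shape variation arises from the simulated process itself rather than being imposed directly by the experimenter, which is the property the abstract emphasizes.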

  8. Learning Area and Perimeter with Virtual Manipulatives

    ERIC Educational Resources Information Center

    Bouck, Emily; Flanagan, Sara; Bouck, Mary

    2015-01-01

    Manipulatives are considered a best practice for educating students with disabilities, but little research exists that examines virtual manipulatives as a tool for supporting students in mathematics. This project investigated the use of a virtual manipulative through the National Library of Virtual Manipulatives--polyominoes (i.e., tiles)--as a…

  9. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  10. Monitoring and analysis of data in cyberspace

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M. (Inventor); Angelino, Robert (Inventor)

    2001-01-01

    Information from monitored systems is displayed in three dimensional cyberspace representations defining a virtual universe having three dimensions. Fixed and dynamic data parameter outputs from the monitored systems are visually represented as graphic objects that are positioned in the virtual universe based on relationships to the system and to the data parameter categories. Attributes and values of the data parameters are indicated by manipulating properties of the graphic object such as position, color, shape, and motion.
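
    The mapping the patent describes, from a monitored data parameter to the properties of a graphic object in the virtual universe, can be sketched as follows (the normalization and the green-to-red color ramp are illustrative assumptions, not the patented scheme):

```python
def parameter_to_object(name, value, lo, hi):
    """Encode a monitored parameter as graphic-object properties:
    position along one axis reflects the normalized value, and color
    shifts from green toward red as the value nears its upper bound."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))  # clamp to [0, 1]
    return {
        "name": name,
        "position": (t * 10.0, 0.0, 0.0),  # x coordinate encodes magnitude
        "color": (t, 1.0 - t, 0.0),        # (r, g, b): green -> red
    }

obj = parameter_to_object("battery_temp", 75.0, lo=0.0, hi=100.0)
print(obj["position"][0], obj["color"])  # 7.5 (0.75, 0.25, 0.0)
```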

  11. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study

    PubMed Central

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-01-01

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the “Florida Secundaria” high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed for more meaningful learning (the concepts learnt proved more durable). PMID:27792132

  12. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…

  13. "Virtual Feel" Capaciflectors

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    2006-01-01

    The term "virtual feel" denotes a type of capaciflector (an advanced capacitive proximity sensor) and a methodology for designing and using a sensor of this type to guide a robot in manipulating a tool (e.g., a wrench socket) into alignment with a mating fastener (e.g., a bolt head) or other electrically conductive object. A capaciflector includes at least one sensing electrode, excited with an alternating voltage, that puts out a signal indicative of the capacitance between that electrode and a proximal object.
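
    To see why electrode-to-object capacitance signals proximity, an idealized parallel-plate model is enough (a deliberate simplification; a real capaciflector's geometry and shielding are more involved): capacitance grows as the gap to the conductive object closes.

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance(area_m2, gap_m):
    """Idealized parallel-plate estimate of sensor-to-object capacitance."""
    return EPS0 * area_m2 / gap_m

# The same 1 cm^2 electrode reads a larger capacitance at a 1 mm gap
# than at a 10 mm gap, which is the proximity cue a capaciflector uses.
near = plate_capacitance(1e-4, 0.001)
far = plate_capacitance(1e-4, 0.010)
print(near > far)  # True
```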

  14. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    NASA Astrophysics Data System (ADS)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
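
    The per-pixel z-test underlying deep-image-style compositing can be sketched in a few lines; the tuple-per-pixel layer representation here is a simplification for illustration, not ZeDI's or OpenEXR's actual data model:

```python
def z_composite(layer_a, layer_b):
    """Merge two layers pixel by pixel; each pixel is (z_depth, color)
    and the sample with the smaller z (closer to the camera) wins."""
    return [a if a[0] < b[0] else b for a, b in zip(layer_a, layer_b)]

# A 1x3 backdrop and a "flat" sphere element that is closer only in
# the middle pixel, so it occludes exactly that pixel.
backdrop = [(5.0, "sky"), (5.0, "sky"), (5.0, "sky")]
sphere = [(9.0, "empty"), (2.0, "sphere"), (9.0, "empty")]
print([color for _, color in z_composite(backdrop, sphere)])
# ['sky', 'sphere', 'sky']
```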

  15. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  16. Televirtuality: "Being There" in the 21st Century.

    ERIC Educational Resources Information Center

    Jacobson, Robert

    Virtual worlds technology (VWT) uses special computer hardware and software to link humans with computers in natural ways. A data model, or virtual world, is created and presented as a three-dimensional world of sights and sounds. The participant manipulates apparent objects in the world, and in so doing, alters the data model. VWT will become…

  17. Virtual Manipulatives in the K-12 Classroom.

    ERIC Educational Resources Information Center

    Moyer, Patricia S.; Bolyard, Johnna J.; Spikell, Mark A.

    Innovations in technology, along with the growing prevalence of the Internet and its increasing availability in classrooms and homes throughout the world, have created a new class of manipulatives, virtual manipulatives. These "virtual manipulatives" offer a new, enhanced approach for teaching and learning mathematics using manipulatives and…

  18. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

    In this paper, we propose a novel approach that uses interactive virtual environment technology in vision restoration therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvement can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  19. Integrated Data Visualization and Virtual Reality Tool

    NASA Technical Reports Server (NTRS)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  20. Direct manipulation of virtual objects

    NASA Astrophysics Data System (ADS)

    Nguyen, Long K.

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

  1. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. Other solutions use sensors such as Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects with natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body that follows the movements of the tracked user. The movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to the traditional navigation approach using a hand-held device. We use the point-cloud data obtained from the Kinect depth camera to recognize users' gestures, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition with Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.
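
    The walking-state decision from tracked foot joints might be approximated by a heuristic like the one below; the feet-height rule and the threshold value are assumptions for illustration, not the authors' actual detector:

```python
def is_walking(left_foot_y, right_foot_y, threshold_m=0.05):
    """Guess the walking state from tracked foot-joint heights: during a
    step one foot is lifted, so the height difference exceeds a threshold;
    when standing, both feet rest at roughly the same height."""
    return abs(left_foot_y - right_foot_y) > threshold_m

print(is_walking(0.10, 0.02))  # True: one foot clearly lifted
print(is_walking(0.03, 0.03))  # False: both feet planted
```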

  2. Calculus of nonrigid surfaces for geometry and texture manipulation.

    PubMed

    Bronstein, Alexander; Bronstein, Michael; Kimmel, Ron

    2007-01-01

    We present a geometric framework for automatically finding intrinsic correspondence between three-dimensional nonrigid objects. We model object deformation as near isometries and find the correspondence as the minimum-distortion mapping. A generalization of multidimensional scaling is used as the numerical core of our approach. As a result, we obtain the possibility to manipulate the extrinsic geometry and the texture of the objects as vectors in a linear space. We demonstrate our method on the problems of expression-invariant texture mapping onto an animated three-dimensional face, expression exaggeration, morphing between faces, and virtual body painting.

  3. The Comparative Effectiveness of Physical, Virtual, and Virtual-Physical Manipulatives on Third-Grade Students' Science Achievement and Conceptual Understanding of Evaporation and Condensation

    ERIC Educational Resources Information Center

    Wang, Tzu-Ling; Tseng, Yi-Kuan

    2018-01-01

    The purpose of this study was to investigate the relative effectiveness of experimenting with physical manipulatives alone, virtual manipulatives alone, and virtual preceding physical manipulatives (combination environment) on third-grade students' science achievement and conceptual understanding in the domain of state changes of water, focusing…

  4. Physical versus Virtual Manipulative Experimentation in Physics Learning

    ERIC Educational Resources Information Center

    Zacharia, Zacharias C.; Olympiou, Georgios

    2011-01-01

    The aim of this study was to investigate whether physical or virtual manipulative experimentation can differentiate physics learning. There were four experimental conditions, namely Physical Manipulative Experimentation (PME), Virtual Manipulative Experimentation (VME), and two sequential combinations of PME and VME, as well as a control condition…

  5. Predictability, Force and (Anti-)Resonance in Complex Object Control.

    PubMed

    Maurice, Pauline; Hogan, Neville; Sternad, Dagmar

    2018-04-18

    Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Contrary to Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
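
    For a hanging pendulum on a driven cart, the force-to-cart-motion response has its anti-resonance at the pendulum's own natural frequency, which gives a feel for the frequency landscape described in the abstract (the 25 cm length is an arbitrary illustrative value, not a parameter from the study):

```python
import math

def pendulum_antiresonance_hz(length_m, g=9.81):
    """Natural frequency of a simple pendulum, f = sqrt(g/l) / (2*pi).
    Driving the cart near this frequency excites the pendulum while the
    cart itself barely responds: an anti-resonance of the coupled system."""
    return math.sqrt(g / length_m) / (2 * math.pi)

print(round(pendulum_antiresonance_hz(0.25), 2))  # ~1.0 Hz for a 25 cm pendulum
```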

  6. Interactive Virtual and Physical Manipulatives for Improving Students' Spatial Skills

    ERIC Educational Resources Information Center

    Ha, Oai; Fang, Ning

    2018-01-01

    An innovative educational technology called interactive virtual and physical manipulatives (VPM) is developed to improve students' spatial skills. With VPM technology, not only can students touch and play with real-world physical manipulatives in their hands but also they can see how the corresponding virtual manipulatives (i.e., computer…

  7. A Comparison Study of Polyominoes Explorations in a Physical and Virtual Manipulative Environment

    ERIC Educational Resources Information Center

    Yuan, Y.; Lee, C. -Y.; Wang, C. -H.

    2010-01-01

    This study develops virtual manipulative polyomino kits for junior high school students to explore polyominoes. The current work uses a non-equivalent-group pretest-posttest quasi-experimental design to compare the performance difference between using physical manipulatives and virtual manipulatives in finding the number of polyominoes.…

  8. Virtual reality and interactive 3D as effective tools for medical training.

    PubMed

    Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave

    2003-01-01

    CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience, which matches the strong need for distance training and planning among surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is that of an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated and corrective procedures demonstrated.

  9. Virtual Manipulative Materials in Secondary Mathematics: A Theoretical Discussion

    ERIC Educational Resources Information Center

    Namukasa, Immaculate K.; Stanley, Darren; Tuchtie, Martin

    2009-01-01

    With the increased use of computer manipulatives in teaching, there is a need for theoretical discussions on the role of manipulatives. This paper reviews theoretical rationales for using manipulatives and illustrates how earlier distinctions of manipulative materials have been broadened to include new forms of materials such as virtual manipulatives.…

  10. Spatial Reasoning with External Visualizations: What Matters Is What You See, Not whether You Interact

    ERIC Educational Resources Information Center

    Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.

    2008-01-01

    Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…

  11. Studies of the field-of-view resolution tradeoff in virtual-reality systems

    NASA Technical Reports Server (NTRS)

    Piantanida, Thomas P.; Boman, Duane; Larimer, James; Gille, Jennifer; Reed, Charles

    1992-01-01

    Most virtual-reality systems use LCD-based displays that achieve a large field-of-view at the expense of resolution. A typical display will consist of approximately 86,000 pixels uniformly distributed over an 80-degree by 60-degree image. Thus, each pixel subtends about 13 minutes of arc at the retina; about the same as the resolvable features of the 20/200 line of a Snellen Eye Chart. The low resolution of LCD-based systems limits task performance in some applications. We have examined target-detection performance in a low-resolution virtual world. Our synthesized three-dimensional virtual worlds consisted of target objects that could be positioned at a fixed distance from the viewer, but at random azimuth and constrained elevation. A virtual world could be bounded by chromatic walls or by wire-frame, or it could be unbounded. Viewers scanned these worlds and indicated by appropriate gestures when they had detected the target object. By manipulating the viewer's field size and the chromatic and luminance contrast of annuli surrounding the field-of-view, we were able to assess the effect of field size on the detection of virtual objects in low-resolution synthetic worlds.
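    The pixel figures quoted above can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming a uniform grid of square pixels (the 86,000-pixel count and the 80-degree by 60-degree field come from the record itself):

```python
import math

def pixel_subtense_arcmin(n_pixels, fov_h_deg, fov_v_deg):
    """Approximate angular size of one pixel for a uniform grid of square pixels."""
    area_deg2 = fov_h_deg * fov_v_deg        # display field in square degrees
    pixel_area = area_deg2 / n_pixels        # area available per pixel
    return math.sqrt(pixel_area) * 60.0      # pixel side length, degrees -> arc minutes

# Values quoted in the record: ~86,000 pixels over an 80-degree by 60-degree image.
print(pixel_subtense_arcmin(86_000, 80.0, 60.0))  # ~14 arc minutes, consistent with "about 13"
```

    The small gap between this estimate and the quoted 13 arc minutes is expected; the exact value depends on the assumed pixel aspect ratio.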

  12. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision the framework will include, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses a proposed test to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and presents results from a pilot test.

  13. Master-slave system with force feedback based on dynamics of virtual model

    NASA Technical Reports Server (NTRS)

    Nojima, Shuji; Hashimoto, Hideki

    1994-01-01

    A master-slave system can extend the manipulating and sensing capabilities of a human operator to a remote environment, but it has two serious problems: the mechanically large impedance of the system, and the mechanical complexity of the slave required for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can assist the human operator by exploiting its strengths, such as fast calculation and large memory. The authors suggest a slave that is a dexterous hand with many degrees of freedom, able to manipulate an object of known shape, and further suggest that the remote workspace be shared by the human operator and the slave. The effect of the system's large impedance can be reduced in a virtual model: a physical model constructed in a computer with physical parameters, as if it existed in the real world. A method to determine the damping parameter of the virtual model dynamically is proposed. Experimental results show that this virtual model performs better than a virtual model with fixed damping.

  14. Making Sense of Integer Arithmetic: The Effect of Using Virtual Manipulatives on Students' Representational Fluency

    ERIC Educational Resources Information Center

    Bolyard, Johnna; Moyer-Packenham, Patricia

    2012-01-01

    This study investigated how the use of virtual manipulatives in integer instruction impacts student achievement for integer addition and subtraction. Of particular interest was the influence of using virtual manipulatives on students' ability to create and translate among representations for integer computation. The research employed a…

  15. Gender Differences in the Relationship between Taiwanese Adolescents' Mathematics Attitudes and Their Perceptions toward Virtual Manipulatives

    ERIC Educational Resources Information Center

    Lee, Chun-Yi; Yuan, Yuan

    2010-01-01

    This study explored gender differences in the relationship between young people's mathematics attitudes and their perceptions toward virtual manipulatives. Seven hundred eighty junior high school adolescents who participated in the problem-solving activity using virtual manipulatives were selected for examination. The study found the male…

  16. Applied virtual reality at the Research Triangle Institute

    NASA Technical Reports Server (NTRS)

    Montoya, R. Jorge

    1994-01-01

    Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating, and interacting with large geometric databases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object database is created using data captured by hand or electronically. The objects' realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the database using software tools which also support interaction with, and immersion in, the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Frauenkirche, and a maintenance training simulator for the National Guard.

  17. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulation in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D counterpart of 2-D window systems. Both abstractions are intended to form the foundation for adaptable 3-D manipulation.

  18. Virtualization Technologies in Information Systems Education

    ERIC Educational Resources Information Center

    Lunsford, Dale L.

    2009-01-01

    Information systems educators must balance the need to protect the stability, availability, and security of computer laboratories with the learning objectives of various courses. In advanced courses where students need to install, configure, and otherwise manipulate application and operating system settings, this is especially problematic as these…

  19. Using a Virtual Manipulative Environment to Support Students' Organizational Structuring of Volume Units

    ERIC Educational Resources Information Center

    O'Dell, Jenna R.; Barrett, Jeffrey E.; Cullen, Craig J.; Rupnow, Theodore J.; Clements, Douglas H.; Sarama, Julie; Rutherford, George; Beck, Pamela S.

    2017-01-01

    In this study, we investigated how Grade 3 and 4 students' organizational structure for volume units develops through repeated experiences with a virtual manipulative for building prisms. Our data consist of taped clinical interviews within a micro-genetic experiment. We report on student strategy development using a virtual manipulative for…

  20. Motor learning from virtual reality to natural environments in individuals with Duchenne muscular dystrophy.

    PubMed

    Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello

    2017-11-10

    To examine whether performance improvements in a virtual environment generalize to the natural environment, we studied 64 individuals: 32 with DMD and 32 typically developing. The groups practiced two coincidence timing tasks. In the more tangible button-press task, individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, performing a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher difficulty level than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION: Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks, while task complexity and difficulty levels can be manipulated, graded, and enhanced to increase the likelihood of successful transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.

  1. Challenges to the development of complex virtual reality surgical simulations.

    PubMed

    Seymour, N E; Røtnes, J S

    2006-11-01

    Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.

  2. A "Virtual Spin" on the Teaching of Probability

    ERIC Educational Resources Information Center

    Beck, Shari A.; Huse, Vanessa E.

    2007-01-01

    This article, which describes integrating virtual manipulatives with the teaching of probability at the elementary level, puts a "virtual spin" on the teaching of probability to provide more opportunities for students to experience successful learning. The traditional use of concrete manipulatives is enhanced with virtual coins and spinners from…

  3. Laser device

    DOEpatents

    Scott, Jill R.; Tremblay, Paul L.

    2008-08-19

    A laser device includes a virtual source configured to aim laser energy that originates from a true source. The virtual source has a vertical rotational axis during vertical motion of the virtual source and the vertical axis passes through an exit point from which the laser energy emanates independent of virtual source position. The emanating laser energy is collinear with an orientation line. The laser device includes a virtual source manipulation mechanism that positions the virtual source. The manipulation mechanism has a center of lateral pivot approximately coincident with a lateral index and a center of vertical pivot approximately coincident with a vertical index. The vertical index and lateral index intersect at an index origin. The virtual source and manipulation mechanism auto align the orientation line through the index origin during virtual source motion.

  4. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three-dimensional observers. Intuition relies on experience gained in a three-dimensional environment. Gaining experience with virtual four-dimensional objects and virtual three-manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. This dissertation describes a technology to convert a representation of higher-dimensional models into a format that may be displayed in real time on graphics cards available in many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four-dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four-dimensional objects such that the user can see the model from any user-selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple-POV viewports, and optionally exported to a third-party CAD viewer for further manipulation. Plucking and manipulating the Aspect provides a tangible experience for the end user, in the same manner as any 3D computer-aided design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.
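    The "plucking" operation described in this record can be illustrated in miniature. A hypothetical sketch, not the dissertation's implementation: intersecting the edges of a unit 4-cube with a cutting hyperplane w = const yields the vertices of a 3D cross-section.

```python
from itertools import product

def slice_tesseract(w_cut):
    """Intersect the unit 4-cube [0,1]^4 with the hyperplane w = w_cut.

    Returns the 3D points (x, y, z) where the hypercube's edges cross the
    cutting hyperplane -- a minimal version of extracting a 3D "aspect".
    """
    verts = list(product((0.0, 1.0), repeat=4))
    points = set()
    for a in verts:
        for b in verts:
            # An edge connects vertices differing in exactly one coordinate.
            if sum(p != q for p, q in zip(a, b)) != 1:
                continue
            wa, wb = a[3], b[3]
            if wa != wb and min(wa, wb) <= w_cut <= max(wa, wb):
                t = (w_cut - wa) / (wb - wa)   # interpolate along the edge
                points.add(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    return sorted(points)

# Cutting the 4-cube at w = 0.5 yields the 8 corners of an ordinary unit cube.
print(len(slice_tesseract(0.5)))  # 8
```

    Sliding `w_cut` from 0 to 1 here always yields a cube, but for a rotated 4D model the same edge-intersection idea produces the changing 3D shapes the dissertation lets the user explore.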

  5. Frames of Reference in Mobile Augmented Reality Displays

    ERIC Educational Resources Information Center

    Mou, Weimin; Biocca, Frank; Owen, Charles B.; Tang, Arthur; Xiao, Fan; Lim, Lynette

    2004-01-01

    In 3 experiments, the authors investigated spatial updating in augmented reality environments. Participants learned locations of virtual objects on the physical floor. They were turned to appropriate facing directions while blindfolded before making pointing judgments (e.g., "Imagine you are facing X. Point to Y"). Experiments manipulated the…

  6. Food for Thought: The Role of Manipulatives in The Teaching of Fractions

    ERIC Educational Resources Information Center

    Day, Lorraine; Hurrell, Derek

    2017-01-01

    The proliferation of computers, tablets, and internet access has brought the use of virtual manipulatives into the majority of classrooms in the developed world. In responding to the needs of today's students, many of whom are adept at accessing and manipulating technology devices, virtual manipulatives provide a variety of classroom…

  7. The Effects on Students' Conceptual Understanding of Electric Circuits of Introducing Virtual Manipulatives within a Physical Manipulatives-Oriented Curriculum

    ERIC Educational Resources Information Center

    Zacharia, Zacharias C.; de Jong, Ton

    2014-01-01

    This study investigates whether Virtual Manipulatives (VM) within a Physical Manipulatives (PM)-oriented curriculum affect conceptual understanding of electric circuits and related experimentation processes. A pre-post comparison study randomly assigned 194 undergraduates in an introductory physics course to one of five conditions: three…

  8. Implementing virtual reality interfaces for the geosciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.; Jacobsen, J.; Austin, A.

    1996-06-01

    For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum ranging from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters; rich, extensible, and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is an obstacle in that typing is cumbersome, error-prone, and typically slow. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display, and the user can interact with these parameters by means of highly developed motor skills. Two specific geoscience application areas are highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon and provides parameters used by the streamlines algorithm.

  9. Advances in Modal Analysis Using a Robust and Multiscale Method

    NASA Astrophysics Data System (ADS)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
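    Modal synthesis of the kind this record describes renders an impact sound as a sum of exponentially damped sinusoids, one per vibration mode. A toy sketch; the mode frequencies, dampings, and amplitudes below are illustrative values, not output of the paper's voxel-based analysis:

```python
import math

def modal_impulse_response(modes, duration=0.5, sr=44_100):
    """Render an impact sound as a sum of exponentially damped sinusoids.

    `modes` is a list of (frequency_hz, damping_per_s, amplitude) triples --
    the kind of parameters a modal analysis of an object would produce.
    """
    n = int(duration * sr)
    samples = [0.0] * n
    for freq, damping, amp in modes:
        for i in range(n):
            t = i / sr
            samples[i] += amp * math.exp(-damping * t) * math.sin(2 * math.pi * freq * t)
    return samples

# Hypothetical modes for a small metallic object (illustrative values only).
sound = modal_impulse_response([(440.0, 6.0, 1.0), (1370.0, 9.0, 0.5), (2510.0, 14.0, 0.25)])
```

    The lower-resolution approximations mentioned in the abstract amount to keeping fewer, retuned modes in this sum.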

  10. Teaching Mathematics to Young Children through the Use of Concrete and Virtual Manipulatives

    ERIC Educational Resources Information Center

    D'Angelo, Frank; Iliev, Nevin

    2012-01-01

    The use of manipulatives is an essential key to teaching mathematics to young children. Throughout history, different types of manipulatives have been used to aid in comprehension of mathematical concepts including quipu, abaci and pattern blocks. Today, concrete and virtual manipulatives are the tools that early childhood teachers are using in…

  11. Virtual vs. Concrete Manipulatives in Mathematics Teacher Education: Is One Type More Effective than the Other?

    ERIC Educational Resources Information Center

    Hunt, Annita W.; Nipper, Kelli L.; Nash, Linda E.

    2011-01-01

    Are virtual manipulatives as effective as concrete (hands-on) manipulatives in building conceptual understanding of number concepts and relationships in pre-service middle grades teachers? In the past, the use of concrete manipulatives in mathematics courses for Clayton State University's pre-service middle grades teachers has been effective in…

  12. From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy

    NASA Astrophysics Data System (ADS)

    Jang, Susan

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. The manipulation group outperformed the visual group, suggesting that the interactivity that took place among the manipulation group promoted visual and motoric embodiment, which in turn enhanced learning. Moreover, when accounting for spatial ability, it was found that manipulation benefits students with low spatial ability more than students with high spatial ability.

  13. An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform

    NASA Astrophysics Data System (ADS)

    Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang

    2018-06-01

    This work discusses a fast and efficient method of constructing an open 3D manipulator virtual simulation platform that makes it easier for teachers and students to learn about the forward and inverse kinematics of a robot manipulator. The method was carried out in MATLAB, in which the Robotics Toolbox, MATLAB GUI, and 3D animation, with models built in SolidWorks, were applied to produce a good visualization of the system. The advantages of this rapid-construction approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator built by the authors' research group is used as an example. The implementation steps of the method are described in detail, resulting in an open, realistic, high-level manipulator 3D virtual simulation platform. Test results based on the simulation graphs show that the platform can be constructed quickly, offers good usability and high maneuverability, and can meet the needs of scientific research and teaching.
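    The forward/inverse kinematics such a platform teaches can be illustrated without MATLAB. A minimal planar two-link sketch (not the Schunk six-DOF arm from the record; the unit link lengths and the elbow-down branch are arbitrary choices):

```python
import math

def fk_planar_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_planar_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics (elbow-down branch) for the same arm."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp against rounding error
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Round trip: inverse kinematics recovers angles that map back to the target point.
t1, t2 = ik_planar_2link(1.2, 0.8)
print([round(c, 6) for c in fk_planar_2link(t1, t2)])  # [1.2, 0.8]
```

    A six-DOF arm replaces these two trigonometric lines with a chain of homogeneous transforms, but the round-trip check between forward and inverse kinematics is the same teaching exercise.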

  14. Control of repulsive force in a virtual environment using an electrorheological haptic master for a surgical robot application

    NASA Astrophysics Data System (ADS)

    Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok

    2014-01-01

    This paper presents control performances of a new type of four-degrees-of-freedom (4-DOF) haptic master that can be used for robot-assisted minimally invasive surgery (RMIS). By adopting a controllable electrorheological (ER) fluid, the proposed master provides haptic feedback as well as remote manipulation. In order to verify the efficacy of the proposed master and method, an experiment is conducted with deformable objects representing human organs. Since using real human organs for control experiments is difficult due to high cost and ethical concerns, a virtual reality environment is used as an excellent alternative in this work. In order to embody a human organ in the virtual space, the experiment adopts a volumetric deformable object represented by a shape-retaining chain linked (S-chain) model, which has salient properties such as fast and realistic deformation of elastic objects. In the haptic architecture for RMIS, the desired torque/force from the object of the virtual slave and the desired position from the operator of the haptic master are exchanged. In order to achieve the desired torque/force trajectories, a sliding mode controller (SMC), which is known to be robust to uncertainties, is designed and empirically implemented. Tracking control performances for various torque/force trajectories from the virtual slave are evaluated and presented in the time domain.
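    A basic sliding mode control law of the kind mentioned can be sketched in a few lines. The gains, boundary layer, and the double-integrator plant below are illustrative assumptions, not the paper's ER-fluid master model:

```python
def smc_force(error, d_error, lam=5.0, k=10.0, phi=0.05):
    """Sliding mode control law: drive s = d_error + lam*error toward zero.
    A boundary-layer saturation replaces sign() to reduce chattering."""
    s = d_error + lam * error
    sat = max(-1.0, min(1.0, s / phi))   # saturated switching term
    return -k * sat

# Track a unit force step on a simplified double-integrator plant.
x, v, target, dt = 0.0, 0.0, 1.0, 0.001
for _ in range(5000):
    u = smc_force(x - target, v)
    v += u * dt        # acceleration equals the control input (unit inertia)
    x += v * dt
print(round(x, 2))  # 1.0
```

    The robustness the paper relies on comes from the switching term: as long as disturbances stay below the gain `k`, the state is driven onto the surface `s = 0` and slides along it to the target.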

  15. In a demanding task, three-handed manipulation is preferred to two-handed manipulation

    NASA Astrophysics Data System (ADS)

    Abdi, Elahe; Burdet, Etienne; Bouri, Mohamed; Himidan, Sharifa; Bleuler, Hannes

    2016-02-01

    Equipped with a third hand under their direct control, surgeons may be able to perform certain surgical interventions alone; this would reduce the need for a human assistant and related coordination difficulties. However, does human performance improve with three hands compared to two? To evaluate this possibility, we carried out a behavioural study on the performance of naive adults catching objects with three virtual hands controlled by their two hands and right foot. The subjects could successfully control the virtual hands within a few trials. With this control strategy, the workspace of the hands was inversely correlated with the task velocity. Comparing three-handed and two-handed control revealed no significant difference in success at catching falling objects or in average effort during the tasks. Subjects preferred the three-handed control strategy, finding it easier and less physically and mentally burdensome. Although the coordination of the foot with the natural hands improved trial after trial, about two minutes of practice was not sufficient to develop a sense of ownership towards the third arm.

  16. In a demanding task, three-handed manipulation is preferred to two-handed manipulation.

    PubMed

    Abdi, Elahe; Burdet, Etienne; Bouri, Mohamed; Himidan, Sharifa; Bleuler, Hannes

    2016-02-25

    Equipped with a third hand under their direct control, surgeons may be able to perform certain surgical interventions alone; this would reduce the need for a human assistant and related coordination difficulties. However, does human performance improve with three hands compared to two? To evaluate this possibility, we carried out a behavioural study on the performance of naive adults catching objects with three virtual hands controlled by their two hands and right foot. The subjects could successfully control the virtual hands within a few trials. With this control strategy, the workspace of the hands was inversely correlated with the task velocity. Comparing three-handed and two-handed control revealed no significant difference in success at catching falling objects or in average effort during the tasks. Subjects preferred the three-handed control strategy, finding it easier and less physically and mentally burdensome. Although the coordination of the foot with the natural hands improved trial after trial, about two minutes of practice was not sufficient to develop a sense of ownership towards the third arm.

  17. Towards control of dexterous hand manipulations using a silicon Pattern Generator.

    PubMed

    Russell, Alexander; Tenore, Francesco; Singhal, Girish; Thakor, Nitish; Etienne-Cummings, Ralph

    2008-01-01

    This work demonstrates how an in silico Pattern Generator (PG) can be used as a low power control system for rhythmic hand movements in an upper-limb prosthesis. Neural spike patterns, which encode rotation of a cylindrical object, were implemented in a custom Very Large Scale Integration chip. PG control was tested by using the decoded control signals to actuate the fingers of a virtual prosthetic arm. This system provides a framework for prototyping and controlling dexterous hand manipulation tasks in a compact and efficient solution.
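    A pattern generator's rhythmic output is often abstracted as coupled oscillators that lock to a fixed phase relationship. A hypothetical sketch using a phase-oscillator model with made-up gains, not the paper's spiking VLSI implementation:

```python
import math

def cpg_outputs(n_steps=2000, dt=0.005, freq=1.0, phase_lag=math.pi):
    """Two phase-coupled oscillators that settle into antiphase -- a common
    abstraction of a pattern generator driving cyclic finger motion."""
    theta1, theta2 = 0.0, 0.3              # arbitrary initial phases
    out = []
    for _ in range(n_steps):
        # Advance both phases from the old state: base rate plus a pull
        # toward the desired phase difference with the partner oscillator.
        d1 = dt * (2 * math.pi * freq + math.sin(theta2 - theta1 - phase_lag))
        d2 = dt * (2 * math.pi * freq + math.sin(theta1 - theta2 + phase_lag))
        theta1, theta2 = theta1 + d1, theta2 + d2
        out.append((math.sin(theta1), math.sin(theta2)))
    return out

sig = cpg_outputs()
# After the transient, the two rhythmic commands are locked in antiphase:
# when one output peaks, the other is at its trough.
```

    The appeal of such generators for prosthesis control, as in the record, is that a compact circuit produces the whole rhythm from a low-bandwidth command, rather than streaming every joint trajectory.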

  18. The National Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.

    2001-06-01

    The National Virtual Observatory is a distributed computational facility that will provide access to the "virtual sky": the federation of astronomical data archives, object catalogs, and associated information services. The NVO's "virtual telescope" is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high-performance computational systems, high-bandwidth network services, agreed-upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the outset will help to ensure that the NVO simultaneously becomes a global facility.

  19. A decade of telerobotics in rehabilitation: Demonstrated utility blocked by the high cost of manipulation and the complexity of the user interface

    NASA Technical Reports Server (NTRS)

    Leifer, Larry; Michalowski, Stefan; Vanderloos, Machiel

    1991-01-01

    The Stanford/VA Interactive Robotics Laboratory set out in 1978 to test the hypothesis that industrial robotics technology could be applied to serve the manipulation needs of severely impaired individuals. Five generations of hardware, three generations of system software, and over 125 experimental subjects later, we believe that genuine utility is achievable. The experience includes development of over 65 task applications using voiced command, joystick control, natural language command and 3D object designation technology. A brief foray into virtual environments, using flight simulator technology, was instructive. If reality and virtuality come for comparable prices, you cannot beat reality. A detailed review of assistive robot anatomy and the performance specifications needed to achieve cost/beneficial utility will be used to support discussion of the future of rehabilitation telerobotics. Poised on the threshold of commercial viability, but constrained by the high cost of technically adequate manipulators, this worthy application domain flounders temporarily. In the long run, it will be the user interface that governs utility.

  20. Virtual Manipulatives: What They Are and How Teachers Can Use Them

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara M.

    2010-01-01

    Research on the positive impact of using concrete manipulatives in mathematics for students with high-incidence disabilities is clear. Maccini and Gagnon (2000) considered manipulatives to be a best practice in terms of educating students with high-incidence disabilities in mathematics. It would follow, then, that research on virtual manipulatives…

  1. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular due to misalignment of the two worlds and distorted spatial cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs while avoiding a quantitative discrepancy between real and virtual motions. In particular, we consider which regions of the virtual view these apparent self-motion illusions can be applied to, i.e., the ground plane or peripheral vision. To this end, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that such manipulations of optic flow fields can compensate for the underestimation of travel distances.

  2. User Control and Task Authenticity for Spatial Learning in 3D Environments

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Harper, Barry

    2004-01-01

    This paper describes two empirical studies which investigated the importance for spatial learning of view control and object manipulation within 3D environments. A 3D virtual chemistry laboratory was used as the research instrument. Subjects, who were university undergraduate students (34 in the first study and 80 in the second study), undertook…

  3. The Use of Physical and Virtual Manipulatives in an Undergraduate Mechanical Engineering (Dynamics) Course

    ERIC Educational Resources Information Center

    Pan, Edward A.

    2013-01-01

    Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in…

  4. Using "Second Life" in School Librarianship

    ERIC Educational Resources Information Center

    Perez, Lisa

    2009-01-01

    In this article, the author discusses using Second Life (SL) in school librarianship. SL is a multi-user virtual environment in which persons create avatars to allow them to move and interact with other avatars. They can build and manipulate objects. To move, they can walk, run, fly, or teleport. There are many areas within SL to allow people to…

  5. Perturbing Practices: A Case Study of the Effects of Virtual Manipulatives as Novel Didactic Objects on Rational Function Instruction

    ERIC Educational Resources Information Center

    Pampel, Krysten

    2017-01-01

    The advancement of technology has substantively changed the practices of numerous professions, including teaching. When an instructor first adopts a new technology, established classroom practices are perturbed. These perturbations can have positive and negative, large or small, and long- or short-term effects on instructors' abilities to teach…

  6. Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.

    PubMed

    Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G

    2015-01-01

    This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring activation of the intraparietal sulcus in users' brain activity while they controlled a virtual manipulator. The objective is to examine how activity in the intraparietal sulcus changes when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback passed; (ii) the user moved with eyes closed, with only somatic feedback passed; (iii) the user only gazed at the screen, with only visual feedback passed. Across all participants, brain activity was significantly greater during natural control of the virtual arm (p < 0.05) than when moving with eyes closed or only gazing at the screen. In conclusion, brain activation reflects the agreement between visual and somatic sensory feedback.

  7. Teleoperation with virtual force feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R.J.

    1993-08-01

    In this paper we describe an algorithm for generating virtual forces in a bilateral teleoperator system. The virtual forces are generated from a world model and are used to provide real-time obstacle avoidance and guidance capabilities. The algorithm requires that the slave's tool and every object in the environment be decomposed into convex polyhedral primitives. Intrusion distance and extraction vectors are then derived at every time step by applying Gilbert's polyhedra distance algorithm, which has been adapted for the task. This information is then used to determine the compression and location of nonlinear virtual spring-dampers whose total force is summed and applied to the manipulator/teleoperator system. Experimental results validate the whole approach, showing that it is possible to compute the algorithm and generate realistic, useful pseudo forces for a bilateral teleoperator system using standard VME bus hardware.
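    As a rough sketch of the nonlinear virtual spring-damper described above: given a scalar intrusion depth and intrusion rate (as would be produced by a polyhedral distance routine such as Gilbert's algorithm), a repulsive force can be computed as below. The function name, gains, and the stiffening law are illustrative assumptions, not the paper's implementation.

```python
def virtual_force(d, d_dot, K=500.0, B=20.0):
    """Repulsive force magnitude (N) for intrusion depth d (m) and
    intrusion rate d_dot (m/s). K and B are illustrative gains."""
    if d <= 0.0:
        return 0.0                    # no intersection: no virtual force
    spring = K * d * (1.0 + d)        # nonlinear (stiffening) spring term
    damper = B * d_dot                # damping opposes the intrusion rate
    return max(spring + damper, 0.0)  # never pull the tool into the object
```

    In a bilateral loop, this force would be summed over all spring-dampers and applied to the master and slave at each time step.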

  8. A Comparison of Concrete and Virtual Manipulative Use in Third- and Fourth-Grade Mathematics

    ERIC Educational Resources Information Center

    Burns, Barbara A.; Hamm, Ellen M.

    2011-01-01

    The primary purpose of this classroom experiment was to examine the effectiveness of concrete (hands-on) manipulatives as compared with virtual (computer-based) manipulatives on student review of fraction concepts in third grade and introduction of symmetry concepts in fourth grade. A pretest-posttest design was employed with a sample of 91…

  9. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  10. Motor resources in peripersonal space are intrinsic to spatial encoding: evidence from motor interference.

    PubMed

    Iachini, Tina; Ruggiero, Gennaro; Ruotolo, Francesco; Vinciguerra, Michela

    2014-11-01

    The aim of this study was to explore the role of motor resources in peripersonal space encoding: are they intrinsic to spatial processes or due to action potentiality of objects? To answer this question, we disentangled the effects of motor resources on object manipulability and spatial processing in peripersonal and extrapersonal spaces. Participants had to localize manipulable and non-manipulable 3-D stimuli presented within peripersonal or extrapersonal spaces of an immersive virtual reality scenario. To assess the contribution of motor resources to the spatial task a motor interference paradigm was used. In Experiment 1, localization judgments were provided with the left hand while the right dominant arm could be free or blocked. Results showed that participants were faster and more accurate in localizing both manipulable and non-manipulable stimuli in peripersonal space with their arms free. On the other hand, in extrapersonal space there was no significant effect of motor interference. Experiment 2 replicated these results by using alternatively both hands to give the response and controlling the possible effect of the orientation of object handles. Overall, the pattern of results suggests that the encoding of peripersonal space involves motor processes per se, and not because of the presence of manipulable stimuli. It is argued that this motor grounding reflects the adaptive need of anticipating what may happen near the body and preparing to react in time.

  11. PyMOL mControl: Manipulating molecular visualization with mobile devices.

    PubMed

    Lam, Wendy W T; Siu, Shirley W I

    2017-01-02

    Viewing and manipulating three-dimensional (3D) structures in molecular graphics software are essential tasks for researchers and students to understand the functions of molecules. Currently, the way to manipulate a 3D molecular object is mainly based on mouse-and-keyboard control, which is often difficult and tedious to learn. While gesture-based and touch-based interactions are increasingly popular in interactive software systems, their suitability for handling molecular graphics has not yet been sufficiently explored. Here, we designed gesture-based and touch-based interaction methods to manipulate virtual objects in PyMOL utilizing the motion and touch sensors in a mobile device. Three fundamental viewing controls, zooming, translation, and rotation, along with frequently used functions were implemented. Results from a pilot user study reveal that task performance on viewing controls is slightly reduced with a mobile device compared to the mouse-and-keyboard method. However, the mobile device was considered more suitable for oral presentations and equally suitable for educational scenarios such as school classes. Overall, PyMOL mControl provides an alternative way to manipulate objects in molecular graphics software with new user experiences. The software is freely available at http://cbbio.cis.umac.mo/mcontrol.html. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(1):76-83, 2017.

  12. Blending Physical and Virtual Manipulatives: An Effort to Improve Students' Conceptual Understanding through Science Laboratory Experimentation

    ERIC Educational Resources Information Center

    Olympiou, Georgios; Zacharia, Zacharias C.

    2012-01-01

    This study aimed to investigate the effect of experimenting with physical manipulatives (PM), virtual manipulatives (VM), and a blended combination of PM and VM on undergraduate students' understanding of concepts in the domain of "Light and Color." A pre-post comparison study design was used for the purposes of this study that involved 70…

  13. Effects of Experimenting with Physical and Virtual Manipulatives on Students' Conceptual Understanding in Heat and Temperature

    ERIC Educational Resources Information Center

    Zacharia, Zacharias C.; Olympiou, Georgios; Papaevripidou, Marios

    2008-01-01

    This study aimed to investigate the comparative value of experimenting with physical manipulatives (PM) in a sequential combination with virtual manipulatives (VM), with the use of PM preceding the use of VM, and of experimenting with PM alone, with respect to changes in students' conceptual understanding in the domain of heat and temperature. A…

  14. Colloidal assembly directed by virtual magnetic moulds

    NASA Astrophysics Data System (ADS)

    Demirörs, Ahmet F.; Pillai, Pramod P.; Kowalczyk, Bartlomiej; Grzybowski, Bartosz A.

    2013-11-01

    Interest in assemblies of colloidal particles has long been motivated by their applications in photonics, electronics, sensors and microlenses. Existing assembly schemes can position colloids of one type relatively flexibly into a range of desired structures, but it remains challenging to produce multicomponent lattices, clusters with precisely controlled symmetries and three-dimensional assemblies. A few schemes can efficiently produce complex colloidal structures, but they require system-specific procedures. Here we show that magnetic field microgradients established in a paramagnetic fluid can serve as `virtual moulds' to act as templates for the assembly of large numbers (~108) of both non-magnetic and magnetic colloidal particles with micrometre precision and typical yields of 80 to 90 per cent. We illustrate the versatility of this approach by producing single-component and multicomponent colloidal arrays, complex three-dimensional structures and a variety of colloidal molecules from polymeric particles, silica particles and live bacteria and by showing that all of these structures can be made permanent. In addition, although our magnetic moulds currently resemble optical traps in that they are limited to the manipulation of micrometre-sized objects, they are massively parallel and can manipulate non-magnetic and magnetic objects simultaneously in two and three dimensions.

  15. Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.

    PubMed

    Farshchiansadegh, Ali; Melendez-Calderon, Alejandro; Ranganathan, Rajiv; Murphey, Todd D; Mussa-Ivaldi, Ferdinando A

    2016-04-01

    The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy were along curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information pertaining to the object, lacking which the trajectories were executed along rectilinear paths.

  16. Predictability and Robustness in the Manipulation of Dynamically Complex Objects

    PubMed Central

    Hasson, Christopher J.

    2017-01-01

    Manipulation of complex objects and tools is a hallmark of many activities of daily living, but how the human neuromotor control system interacts with such objects is not well understood. Even the seemingly simple task of transporting a cup of coffee without spilling creates complex interaction forces that humans need to compensate for. Predicting the behavior of an underactuated object with nonlinear fluid dynamics based on an internal model appears daunting. Hence, this research tests the hypothesis that humans learn strategies that make interactions predictable and robust to inaccuracies in neural representations of object dynamics. The task of moving a cup of coffee is modeled with a cart-and-pendulum system that is rendered in a virtual environment, where subjects interact with a virtual cup with a rolling ball inside using a robotic manipulandum. To gain insight into human control strategies, we operationalize predictability and robustness to permit quantitative theory-based assessment. Predictability is quantified by the mutual information between the applied force and the object dynamics; robustness is quantified by the energy margin away from failure. Three studies are reviewed that show how with practice subjects develop movement strategies that are predictable and robust. Alternative criteria, common for free movement, such as maximization of smoothness and minimization of force, do not account for the observed data. As manual dexterity is compromised in many individuals with neurological disorders, the experimental paradigm and its analyses are a promising platform to gain insights into neurological diseases, such as dystonia and multiple sclerosis, as well as healthy aging. PMID:28035560
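    As a rough illustration of the predictability measure described above, mutual information between two discretised signals (e.g., the applied force and a component of the object's state) can be estimated from a joint histogram. The function below is a minimal sketch under that assumption; the binning scheme and bin count are illustrative choices, not the authors' estimator.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram estimate of I(X;Y) in bits for paired samples xs, ys."""
    def discretise(vs):
        lo, hi = min(vs), max(vs)
        w = (hi - lo) / bins or 1.0  # guard against a constant signal
        return [min(int((v - lo) / w), bins - 1) for v in vs]
    x, y = discretise(xs), discretise(ys)
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # I(X;Y) = sum over joint bins of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

    A perfectly predictable force-state relationship gives a high value (up to log2(bins) bits), while statistically independent signals give a value near zero.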

  17. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment

    PubMed Central

    Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023

  18. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.

    PubMed

    Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  19. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  20. Stability effects of singularities in force-controlled robotic assist devices

    NASA Astrophysics Data System (ADS)

    Luecke, Greg R.

    2002-02-01

    Force feedback is being used as an interface between humans and material handling equipment to provide an intuitive method to control large and bulky payloads. Powered actuation in the lift assist device compensates for the inertial characteristics of the manipulator and the payload to provide effortless control and handling of manufacturing parts, components, and assemblies. The use of these Intelligent Assist Devices (IADs) is being explored to prevent worker injury, enhance material handling performance, and increase productivity in the workplace. The IAD also provides the capability to shape and control motion in the workspace during routine operations. Virtual barriers can be developed to protect fixed objects in the workspace, and regions can be programmed that attract the work piece to a certain position and orientation. However, the robot is still under complete control of the human operator, with the trajectory being determined and commanded using the judgment of the operator to complete a given task. In many cases, the IAD is built in a configuration that may have singular points inside the workspace. These singularities can cause problems when unstructured trajectory commands from the human cause interaction between the IAD and the virtual walls and fixtures at positions close to these singularities. The research presented here explores the stability effects of the interactions between the powered manipulator and the virtual surfaces when controlled by the operator. Because the human operator flexibly determines the real-time work-piece path, manipulator singularities that occur in conjunction with the virtual surfaces raise stability issues. We examine these stability issues in the context of a particular IAD configuration, and present analytic results for the performance and stability of these systems in response to real-time trajectory modification by the human operator.

  1. The Impacts of Virtual Manipulatives and Prior Knowledge on Geometry Learning Performance in Junior High School

    ERIC Educational Resources Information Center

    Lee, Chun-Yi; Chen, Ming-Jang

    2014-01-01

    Previous studies on the effects of virtual and physical manipulatives have failed to consider the impact of prior knowledge on the efficacy of manipulatives. This study focuses on the learning of plane geometry in junior high schools, including the sum of interior angles in polygons, the sum of exterior angles in polygons, and the properties of…

  2. Body ownership and agency: task-dependent effects of the virtual hand illusion on proprioceptive drift.

    PubMed

    Shibuya, Satoshi; Unenaka, Satoshi; Ohki, Yukari

    2017-01-01

    Body ownership and agency are fundamental to self-consciousness. These bodily experiences have been intensively investigated using the rubber hand illusion, wherein participants perceive a fake hand as their own. After presentation of the illusion, the position of the participant's hand then shifts toward the location of the fake hand (proprioceptive drift). However, it remains controversial whether proprioceptive drift is able to provide an objective measurement of body ownership, and whether agency also affects drift. Using the virtual hand illusion (VHI), the current study examined the effects of body ownership and agency on proprioceptive drift, with three different visuo-motor tasks. Twenty healthy adults (29.6 ± 9.2 years old) completed VH manipulations using their right hand under a 2 × 2 factorial design (active vs. passive manipulation, and congruent vs. incongruent virtual hand). Prior to and after VH manipulation, three different tasks were performed to assess proprioceptive drift, in which participants were unable to see their real hands. The effects of the VHI on proprioceptive drift were task-dependent. When participants were required to judge the position of their right hand using a ruler, or by reaching toward a visual target, both body ownership and agency modulated proprioceptive drift. Comparatively, when participants aligned both hands, drift was influenced by ownership but not agency. These results suggest that body ownership and agency might differentially modulate various body representations in the brain.

  3. Manipulation of near-wall turbulence by surface slip and permeability

    NASA Astrophysics Data System (ADS)

    Gómez-de-Segura, G.; Fairhall, C. T.; MacDonald, M.; Chung, D.; García-Mayoral, R.

    2018-04-01

    We study the effect on near-wall turbulence of tangential slip and wall-normal transpiration, typically produced by textured surfaces and other surface manipulations. For this, we conduct direct numerical simulations (DNSs) with different virtual origins for the different velocity components. The different origins result in a relative wall-normal displacement of the near-wall, quasi-streamwise vortices with respect to the mean flow, which in turn produces a change in drag. The objective of this work is to extend the existing understanding of how these virtual origins affect the flow. In the literature, the virtual origins for the tangential velocities are typically characterised by slip boundary conditions, while the wall-normal velocity is assumed to be zero at the boundary plane. Here we explore different techniques to define and implement the three virtual origins, with special emphasis on the wall-normal one. We investigate impedance conditions relating the wall-normal velocity to the pressure, and linear relations between the velocity components and their wall-normal gradients, as is typically done to impose slip conditions. These models are first tested to represent a smooth wall below the boundary plane, with all virtual origins equal, and later for different tangential and wall-normal origins. Our results confirm that the change in drag is determined by the offset between the origins perceived by the mean flow and the quasi-streamwise vortices or, more generally, the near-wall turbulent cycle. The origin for the latter, however, is not set by the spanwise virtual origin alone, as previously proposed, but by a combination of the spanwise and wall-normal origins, and mainly determined by the shallower of the two. These observations allow us to extend the existing expression to predict the change in drag, accounting for the wall-normal effect when the transpiration is not negligible.
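    The boundary conditions referred to above, linear (slip-length) relations for the tangential velocities and an impedance condition for the transpiration, are commonly written in the following standard form; this is a sketch of the kind of conditions the study explores, not necessarily the exact ones used:

```latex
% Tangential slip with slip lengths \ell_x, \ell_z (the virtual origins),
% plus an impedance condition linking transpiration to pressure at y = 0:
u\big|_{y=0} = \ell_x \,\frac{\partial u}{\partial y}\Big|_{y=0}, \qquad
w\big|_{y=0} = \ell_z \,\frac{\partial w}{\partial y}\Big|_{y=0}, \qquad
v\big|_{y=0} = -\,C\, p\big|_{y=0}.
```

    Here $C$ is an illustrative impedance coefficient; choosing the coefficients so that all three virtual origins coincide recovers a smooth wall below the boundary plane, the reference case tested first in the study.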

  4. Kindergarten Children's Interactions with Touchscreen Mathematics Virtual Manipulatives: An Innovative Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Tucker, Stephen I.; Lommatsch, Christina W.; Moyer-Packenham, Patricia S.; Anderson-Pence, Katie L.; Symanzik, Jürgen

    2017-01-01

    The purpose of this study was to examine patterns of mathematical practices evident during children's interactions with touchscreen mathematics virtual manipulatives. Researchers analyzed 33 Kindergarten children's interactions during activities involving apps featuring mathematical content of early number sense or quantity in base ten, recorded…

  5. Virtual Manipulatives: Tools for Teaching Mathematics to Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Shin, Mikyung; Bryant, Diane P.; Bryant, Brian R.; McKenna, John W.; Hou, Fangjuan; Ok, Min Wook

    2017-01-01

    Many students with learning disabilities demonstrate difficulty in developing a conceptual understanding of mathematical topics. Researchers recommend using visual models to support student learning of the concepts and skills necessary to complete abstract and symbolic mathematical problems. Virtual manipulatives (i.e., interactive visual models)…

  6. Effects of Virtual Manipulatives with Different Approaches on Students' Knowledge of Slope

    ERIC Educational Resources Information Center

    Demir, Mustafa

    2018-01-01

    Virtual Manipulatives (VMs) are computer-based, dynamic, and visual representations of mathematical concepts that provide interactive learning environments to advance mathematics instruction (Moyer et al., 2002). Despite their broad use, little research has explored the integration of VMs into mathematics instruction (Moyer-Packenham & Westenskow, 2013).…

  7. Virtual reality systems

    NASA Technical Reports Server (NTRS)

    Johnson, David W.

    1992-01-01

    Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.

  8. Concrete and App-Based Manipulatives to Support Students with Disabilities with Subtraction

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Chamberlain, Courtney; Park, Jiyoon

    2017-01-01

    Manipulatives support students with and without disabilities in mathematics. However, as students age, concrete manipulatives can be limiting and potentially not age appropriate (Satsangi, 2015). An alternative is virtual manipulatives, including app-based manipulatives. This study compared the use of app-based manipulatives to concrete…

  9. Collaborative Aerial-Drawing System for Supporting Co-Creative Communication

    NASA Astrophysics Data System (ADS)

    Osaki, Akihiro; Taniguchi, Hiroyuki; Miwa, Yoshiyuki

    This paper describes a collaborative augmented reality (AR) system with which multiple users can simultaneously handwrite 3D lines in the air and manipulate those lines directly in the real world. In addition, we propose a new technique for co-creative communication utilizing the 3D drawing activity. Various 3D user interfaces have been proposed to date; although most of them aim to solve specific problems in virtual environments, the possibilities of 3D drawing expression have not yet been explored. Accordingly, we paid special attention to interaction with real objects in daily life, and designed the system so that real objects and 3D lines can be manipulated, without distinction, by the same actions. The developed AR system consists of a stereoscopic head-mounted display, a drawing tool, 6DOF sensors measuring three-dimensional position and Euler angles, and a 3D user interface that enables users to push, grasp, and pitch 3D lines directly with the drawing tool. Additionally, users can pick up a desired color from either a landscape or a virtual line through direct interaction with this tool. For sharing 3D lines among multiple users at the same place, a distributed AR system has been developed that mutually sends and receives drawn data between systems. With the developed system, users can design jointly in real space by arranging each 3D drawing through direct manipulation. Moreover, new entertainment applications have become possible, such as playing catch, fencing, and the like.

  10. The Complexity of the Affordance-Ability Relationship When Second-Grade Children Interact with Mathematics Virtual Manipulative Apps

    ERIC Educational Resources Information Center

    Tucker, Stephen I.; Moyer-Packenham, Patricia S.; Westenskow, Arla; Jordan, Kerry E.

    2016-01-01

    The purpose of this study was to explore relationships between app affordances and user abilities in second graders' interactions with mathematics virtual manipulative touchscreen tablet apps. The research questions focused on varying manifestations of affordance-ability relationships during children's interactions with mathematics virtual…

  11. Predictors of Achievement When Virtual Manipulatives Are Used for Mathematics Instruction

    ERIC Educational Resources Information Center

    Moyer-Packenham, Patricia S.; Baker, Joseph; Westenskow, Arla; Anderson-Pence, Katie L.; Shumway, Jessica F.; Jordan, Kerry E.

    2014-01-01

    The purpose of this study was to determine variables that predict performance when virtual manipulatives are used for mathematics instruction. This study used a quasi-experimental design. This design was used to determine variables that predict student performance on tests of fraction knowledge for third- and fourth-grade students in two treatment…

  12. Supporting Teachers' Technological Pedagogical Content Knowledge of Fractions through Co-Designing a Virtual Manipulative

    ERIC Educational Resources Information Center

    Hansen, Alice; Mavrikis, Manolis; Geraniou, Eirini

    2016-01-01

    This study explores the impact that co-designing a virtual manipulative, Fractions Lab, had on teachers' professional development. Tapping into an existing community of practice of mathematics specialist teachers, the study identifies how a cooperative enquiry approach utilising workshops and school-based visits challenged 23 competent primary…

  13. Learning Mathematics with Technology: The Influence of Virtual Manipulatives on Different Achievement Groups

    ERIC Educational Resources Information Center

    Moyer-Packenham, Patricia; Suh, Jennifer

    2012-01-01

    This study examined the influence of virtual manipulatives on different achievement groups during a teaching experiment in four fifth-grade classrooms. During a two-week unit focusing on two rational number concepts (fraction equivalence and fraction addition with unlike denominators) one low achieving, two average achieving, and one high…

  14. Grip Forces During Object Manipulation: Experiment, Mathematical Model & Validation

    PubMed Central

    Slota, Gregory P.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2011-01-01

    When people transport handheld objects, they change the grip force with the object movement. Circular movement patterns were tested within three planes at two different rates (1.0, 1.5 Hz) and two diameters (20, 40 cm). Subjects performed the task reasonably well, matching frequencies and dynamic ranges of accelerations within expectations. A mathematical model was designed to predict the applied normal forces from kinematic data. The model is based on two hypotheses: (a) the grip force changes during movements along complex trajectories can be represented as the sum of effects of two basic commands associated with the parallel and orthogonal manipulation, respectively; (b) different central commands are sent to the thumb and the virtual finger (VF: the four fingers combined). The model predicted the actual normal forces with a total variance accounted for of better than 98%. The effects of the two components of acceleration, along the normal axis and the resultant acceleration within the shear plane, on the digit normal forces are additive. PMID:21735245

  15. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of the device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook, our first implementation on a laptop computer; tangiView, a more refined implementation on a tablet device; tangiPaint, a tangible digital painting application; and phantoView, an application that takes the tangible imaging concept into stereoscopic 3D.

  16. Motivating the Learning of Science Topics in Secondary School: A Constructivist Edutainment Setting for Studying Chaos

    ERIC Educational Resources Information Center

    Bertacchini, Francesca; Bilotta, Eleonora; Pantano, Pietro; Tavernise, Assunta

    2012-01-01

    In this paper, we present an Edutainment (education plus entertainment) secondary school setting based on the construction of artifacts and manipulation of virtual contents (images, sound, and music) connected to Chaos. This interactive learning environment also foresees the use of a virtual theatre, by which students can manipulate 3D contents…

  17. The Effects of Two Generative Activities on Learner Comprehension of Part-Whole Meaning of Rational Numbers Using Virtual Manipulatives

    ERIC Educational Resources Information Center

    Trespalacios, Jesus

    2010-01-01

    This study investigated the effects of two generative learning activities on students' academic achievement of the part-whole meaning of rational numbers while using virtual manipulatives. Third-grade students were divided randomly in two groups to evaluate the effects of two generative learning activities: answering-questions and…

  18. An Exploratory Study of Fifth-Grade Students' Reasoning about the Relationship between Fractions and Decimals When Using Number Line-Based Virtual Manipulatives

    ERIC Educational Resources Information Center

    Smith, Scott

    2017-01-01

    Understanding the relationship between fractions and decimals is an important step in developing an overall understanding of rational numbers. Research has demonstrated the feasibility of technology in the form of virtual manipulatives for facilitating students' meaningful understanding of rational number concepts. This exploratory dissertation…

  19. Enriching Project-Based Learning Environments with Virtual Manipulatives: A Comparative Study

    ERIC Educational Resources Information Center

    Çakiroglu, Ünal

    2014-01-01

    Problem statement: Although there is agreement on the potential of project based learning (PBL) and virtual manipulatives (VMs), their positive impact depends on how they are used. This study was based on supporting the use of online PBL environments and improving the efficacy of the instructional practices in PBL by combining the potentials of…

  20. Comparing the Effectiveness of Virtual and Concrete Manipulatives to Teach Algebra to Secondary Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Satsangi, Rajiv; Bouck, Emily C.; Taber-Doughty, Teresa; Bofferding, Laura; Roberts, Carly A.

    2016-01-01

    A sizable body of literature exists studying various technologies and pedagogical practices for teaching secondary mathematics curriculum to students with a learning disability in mathematics. However, with the growing footprint of computer-based technologies in today's classrooms, some areas of study, such as the use of virtual manipulatives,…

  1. Conflict between object structural and functional affordances in peripersonal space.

    PubMed

    Kalénine, Solène; Wamain, Yannick; Decroix, Jérémy; Coello, Yann

    2016-10-01

    Recent studies indicate that competition between conflicting action representations slows down planning of object-directed actions. The present study aims to assess whether similar conflict effects exist during manipulable object perception. Twenty-six young adults performed reach-to-grasp and semantic judgments on conflictual objects (with competing structural and functional gestures) and non-conflictual objects (with similar structural and functional gestures) presented at different distances in a 3D virtual environment. Results highlight a space-dependent conflict between structural and functional affordances. Perceptual judgments on conflictual objects were slower than perceptual judgments on non-conflictual objects, but only when objects were presented within reach. Findings demonstrate that competition between structural and functional affordances during object perception induces a processing cost, and further show that object position in space can bias affordance competition. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Getting a handle on virtual tools: An examination of the neuronal activity associated with virtual tool use.

    PubMed

    Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A

    2018-01-31

    Tool use is associated with three visual streams-dorso-dorsal, ventro-dorsal, and ventral visual streams. These streams are involved in processing online motor planning, action semantics, and tool semantics features, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation of one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found including right inferior parietal lobule, middle and superior temporal gyri and supramarginal gyrus - regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Tactile feedback for relief of deafferentation pain using virtual reality system: a pilot study.

    PubMed

    Sano, Yuko; Wake, Naoki; Ichinose, Akimichi; Osumi, Michihiro; Oya, Reishi; Sumitani, Masahiko; Kumagaya, Shin-Ichiro; Kuniyoshi, Yasuo

    2016-06-28

    Previous studies have tried to relieve deafferentation pain (DP) by using virtual reality rehabilitation systems. However, the effectiveness of multimodal sensory feedback was not validated. The objective of this study is to relieve DP by neurorehabilitation using a virtual reality system with multimodal sensory feedback and to validate the efficacy of tactile feedback on immediate pain reduction. We have developed a virtual reality rehabilitation system with multimodal sensory feedback and applied it to seven patients with DP caused by brachial plexus avulsion or arm amputation. The patients executed a reaching task using the virtual phantom limb manipulated by their real intact limb. The reaching task was conducted under two conditions: one with tactile feedback on the intact hand and one without. The pain intensity was evaluated through a questionnaire. We found that the task with the tactile feedback reduced DP more (41.8 ± 19.8 %) than the task without the tactile feedback (28.2 ± 29.5 %), which was supported by a Wilcoxon signed-rank test result (p < 0.05). Overall, our findings indicate that the tactile feedback improves the immediate pain intensity through rehabilitation using our virtual reality system.

  4. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  5. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR), respectively. We further found no differences in reported comfort.

  6. Using virtual reality to test the regularity priors used by the human visual system

    NASA Astrophysics Data System (ADS)

    Palmer, Eric; Kwon, TaeKyu; Pizlo, Zygmunt

    2017-09-01

    Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult to generate in real physical spaces. This paper presents a study intended to evaluate the importance of the regularity priors used by the human visual system. Using a CAVE simulation, subjects viewed virtual objects in a variety of experimental manipulations. In the first experiment, the subject was asked to count the objects in a scene that was viewed either right-side-up or upside-down for 4 seconds. The subject counted more accurately in the right-side-up condition regardless of the presence of binocular disparity or color. In the second experiment, the subject was asked to reconstruct the scene from a different viewpoint. Reconstructions were accurate, but the position and orientation error was twice as high when the scene was rotated by 45°, compared to 22.5°. Similarly to the first experiment, there was little difference between monocular and binocular viewing. In the third experiment, the subject was asked to adjust the position of one object to match the depth extent to the frontal extent among three objects. Performance was best with symmetrical objects and became poorer with asymmetrical objects and poorest with only small circular markers on the floor. Finally, in the fourth experiment, we demonstrated reliable performance in monocular and binocular recovery of 3D shapes of objects standing naturally on the simulated horizontal floor. Based on these results, we conclude that gravity, horizontal ground, and symmetry priors play an important role in veridical perception of scenes.

  7. The Role of Affordances in Children's Learning Performance and Efficiency When Using Virtual Manipulative Mathematics Touch-Screen Apps

    ERIC Educational Resources Information Center

    Moyer-Packenham, Patricia S.; Bullock, Emma K.; Shumway, Jessica F.; Tucker, Stephen I.; Watts, Christina M.; Westenskow, Arla; Anderson-Pence, Katie L.; Maahs-Fladung, Cathy; Boyer-Thurgood, Jennifer; Gulkilik, Hilal; Jordan, Kerry

    2016-01-01

    This paper focuses on understanding the role that affordances played in children's learning performance and efficiency during clinical interviews of their interactions with mathematics apps on touch-screen devices. One hundred children, ages 3 to 8, each used six different virtual manipulative mathematics apps during 30-40-min interviews. The…

  8. An instrumented glove for grasp specification in virtual-reality-based point-and-direct telerobotics.

    PubMed

    Yun, M H; Cannon, D; Freivalds, A; Thomas, G

    1997-10-01

    Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping "flavors" has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as "put that there" cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools such that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as is required in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand posture flavors for an object throughout the task in handling the real workpiece or item of interest. In the Computer Integrated Manufacturing (CIM) Laboratory at Penn State, hand posture and force data were collected for manipulating bricks and other items that require varying amounts of force at multiple pressure points. The feasibility of measuring desired grasp characteristics was demonstrated for a modified Cyberglove impregnated with Force-Sensitive Resistor (FSR) pressure sensors in the fingertips. A joint/force model relating the parameters of finger articulation and pressure to various lifting tasks was validated for the instrumented "wired" glove. Operators using such a modified glove may ultimately be able to configure robot grasping tasks in environments involving hazardous waste remediation, flexible manufacturing, space operations, and other flexible robotics applications. In each case, the VR-PAD approach will finesse the computational and delay problems of real-time multiple-degree-of-freedom force-feedback telemanipulation.

  9. Virtual Labs and Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boehler, Ted

    2006-12-01

    Coastline Community College has under development several virtual lab simulations and activities that range from biology, to language labs, to virtual discussion environments. Imagine a virtual world that students enter online, by logging onto their computer from home or anywhere they have web access. Upon entering this world they select a personalized identity represented by a digitized character (avatar) that can freely move about, interact with the environment, and communicate with other characters. In these virtual worlds, buildings, gathering places, conference rooms, labs, science rooms, and a variety of other “real world” elements are evident. When characters move about and encounter other people (players) they may freely communicate. They can examine things, manipulate objects, read signs, watch video clips, hear sounds, and jump to other locations. Goals of critical thinking, social interaction, peer collaboration, group support, and enhanced learning can be achieved in surprising new ways with this innovative approach to peer-to-peer communication in a virtual discussion world. In this presentation, short demos will be given of several online learning environments including a virtual biology lab, a marine science module, a Spanish lab, and a virtual discussion world. Coastline College has been a leader in the development of distance learning and media-based education for nearly 30 years and currently offers courses through PDA, Internet, DVD, CD-ROM, TV, and videoconferencing technologies. Its distance learning program serves over 20,000 students every year. Sponsor: Jerry Meisner

  10. Integration of computer-assisted fracture reduction system and a hybrid 3-DOF-RPS mechanism for assisting the orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Irwansyah; Sinh, N. P.; Lai, J. Y.; Essomba, T.; Asbar, R.; Lee, P. Y.

    2018-02-01

    In this paper, we present a study integrating a virtual fracture-reduction simulation tool with a novel hybrid 3-DOF-RPS external fixator to relocate bone fragments back into their original anatomical position. A 3D model of the fractured bone was reconstructed and manipulated using 3D design and modeling software, PhysiGuide. The virtual reduction system was applied to reduce a bilateral femoral shaft fracture of type 32-A3. Measurement data from the fracture reduction and fixation stages were used to set the manipulator pose in a patient’s clinical case. The experimental results suggest that merging these two techniques can reduce virtual bone reduction time and shorten the healing treatment.

  11. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.

  12. Assessing the use of immersive virtual reality, mouse and touchscreen in pointing and dragging-and-dropping tasks among young, middle-aged and older adults.

    PubMed

    Chen, Jiayin; Or, Calvin

    2017-11-01

    This study assessed the use of an immersive virtual reality (VR), a mouse and a touchscreen for one-directional pointing, multi-directional pointing, and dragging-and-dropping tasks involving targets of smaller and larger widths by young (n = 18; 18-30 years), middle-aged (n = 18; 40-55 years) and older adults (n = 18; 65-75 years). A three-way, mixed-factorial design was used for data collection. The dependent variables were the movement time required and the error rate. Our main findings were that the participants took more time and made more errors in using the VR input interface than in using the mouse or the touchscreen. This pattern applied in all three age groups in all tasks, except for multi-directional pointing with a larger target width among the older group. Overall, older adults took longer to complete the tasks and made more errors than young or middle-aged adults. Larger target widths yielded shorter movement times and lower error rates in pointing tasks, but larger targets yielded higher rates of error in dragging-and-dropping tasks. Our study indicates that virtual environments similar to those we tested may be more suitable for displaying scenes than for manipulating objects that are small and require fine control. Although interacting with VR is relatively difficult, especially for older adults, there is still potential for older adults to adapt to that interface. Furthermore, adjusting the width of objects according to the type of manipulation required might be an effective way to promote performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The detection of 'virtual' objects using echoes by humans: Spectral cues.

    PubMed

    Rowan, Daniel; Papadopoulos, Timos; Archer, Lauren; Goodhew, Amanda; Cozens, Hayley; Lopez, Ricardo Guzman; Edwards, David; Holmes, Hannah; Allen, Robert

    2017-07-01

    Some blind people use echoes to detect discrete, silent objects to support their spatial orientation/navigation, independence, safety and wellbeing. The acoustical features that people use for this are not well understood. Listening to changes in spectral shape due to the presence of an object could be important for object detection and avoidance, especially at short range, although it is currently not known whether it is possible with echolocation-related sounds. Bands of noise were convolved with recordings of binaural impulse responses of objects in an anechoic chamber to create 'virtual objects', which were analysed and played to sighted and blind listeners inexperienced in echolocation. The sounds were also manipulated to remove cues unrelated to spectral shape. Most listeners could accurately detect hard flat objects using changes in spectral shape. The useful spectral changes for object detection occurred above approximately 3 kHz, as with object localisation. However, energy in the sounds below 3 kHz was required to exploit changes in spectral shape for object detection, whereas energy below 3 kHz impaired object localisation. Further recordings showed that the spectral changes were diminished by room reverberation. While good high-frequency hearing is generally important for echolocation, the optimal echo-generating stimulus will probably depend on the task. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Study on Collaborative Object Manipulation in Virtual Environment

    NASA Astrophysics Data System (ADS)

    Mayangsari, Maria Niken; Yong-Moo, Kwon

    This paper presents a comparative study of network collaboration performance under different degrees of immersion. In particular, the relationship between user collaboration performance and the degree of immersion provided by the system is addressed and compared through several experiments. The user tests on our system covered several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN; 2) comparison between non-haptic and haptic collaborative interaction over the Internet; and 3) analysis of collaborative interaction between non-immersive and immersive display environments.

  15. Comparing the influence of physical and virtual manipulatives in the context of the Physics by Inquiry curriculum: The case of undergraduate students' conceptual understanding of heat and temperature

    NASA Astrophysics Data System (ADS)

    Zacharia, Zacharias C.; Constantinou, Constantinos P.

    2008-04-01

    We compare the effect of experimenting with physical or virtual manipulatives on undergraduate students' conceptual understanding of heat and temperature. A pre-post comparison study design was used to replicate all aspects of a guided inquiry classroom except the mode in which students performed their experiments. This study is the first on physical and virtual manipulative experimentation in physics in which the curriculum, method of instruction, and resource capabilities were explicitly controlled. The participants were 68 undergraduates in an introductory course who were randomly assigned to an experimental or a control group. Conceptual tests were administered to both groups to assess students' understanding before, during, and after instruction. The results indicate that both modes of experimentation are equally effective in enhancing students' conceptual understanding. This result is discussed in the context of an ongoing debate on the relative importance of virtual and real laboratory work in physics education.

  16. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

    In this article, we present an approach that uses two force-sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that product designers can use to evaluate the quality of a 3D virtual shape through touch, vision and hearing, and also to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface, retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, involving both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  18. Characteristics of manipulative in mathematics laboratory

    NASA Astrophysics Data System (ADS)

    Istiandaru, A.; Istihapsari, V.; Prahmana, R. C. I.; Setyawan, F.; Hendroanto, A.

    2017-12-01

    A manipulative is a teaching aid designed so that students can understand mathematical concepts by manipulating it. This article provides insight into the characteristics of the manipulatives produced in the mathematics laboratory of Universitas Ahmad Dahlan, Indonesia. A case study was conducted to observe the manipulatives produced during the last three years and to classify them based on the characteristics found. Four kinds of manipulatives were identified: constructivist, virtual, informative, and game-based. Each kind has different characteristics and a different impact on mathematics learning.

  19. Employing immersive virtual environments for innovative experiments in health care communication.

    PubMed

    Persky, Susan

    2011-03-01

    This report reviews the literature for studies that employ immersive virtual environment technology to conduct experimental studies in health care communication. Advantages and challenges of using these tools for research in this area are also discussed. A literature search was conducted using the Scopus database. Results were hand-searched to identify the body of studies, conducted since 1995, related to the report objective. The review identified four relevant studies stemming from two unique projects. One project focused on the impact of a clinician's characteristics and behavior on health care communication; the other focused on the characteristics of the patient. Both projects illustrate key methodological advantages conferred by immersive virtual environments, including the ability to maintain high experimental control and realism simultaneously, the ability to manipulate variables in new ways, and unique behavioral measurement opportunities. Though implementation challenges exist for immersive virtual environment-based research methods, given the technology's unique capabilities, the benefits can outweigh the costs in many instances. Immersive virtual environments may therefore prove an important addition to the array of tools available for advancing our understanding of communication in health care. Published by Elsevier Ireland Ltd.

  20. Virtual Technologies to Develop Visual-Spatial Ability in Engineering Students

    ERIC Educational Resources Information Center

    Roca-González, Cristina; Martin-Gutierrez, Jorge; García-Dominguez, Melchor; Carrodeguas, Mª del Carmen Mato

    2017-01-01

    The present study assessed a short training experiment to improve spatial abilities using two tools based on virtual technologies: one focused on the manipulation of specific geometric virtual pieces, and the other consisting of a virtual orienteering game. The two tools can help improve the spatial abilities required for many engineering problem-solving…

  1. Virtual reality as a new trend in mechanical and electrical engineering education

    NASA Astrophysics Data System (ADS)

    Kamińska, Dorota; Sapiński, Tomasz; Aitken, Nicola; Rocca, Andreas Della; Barańska, Maja; Wietsma, Remco

    2017-12-01

    In their daily practice, academics frequently face a lack of access to the modern equipment and devices currently in use on the market. Moreover, many students have problems understanding issues connected to mechanical and electrical engineering because of their complexity, the need for abstract thinking, and the fact that these concepts are not fully tangible. Many studies indicate that virtual reality can be successfully used as a training tool in various domains, such as development, health care, the military and school education. In this paper, an interactive training strategy for mechanical and electrical engineering education is proposed. The prototype of the software consists of a simple interface, making it easy to understand and use. Additionally, the main part of the prototype allows the user to virtually manipulate a 3D object to be analyzed and studied. Initial studies indicate that the use of virtual reality can contribute to improving the quality and efficiency of higher education, as well as the qualifications, competencies and skills of graduates, and increase their competitiveness in the labour market.

  2. Virtual knotting in proteins and other open curves

    NASA Astrophysics Data System (ADS)

    Alexander, Keith; Taylor, Alexander; Dennis, Mark

    Long filaments naturally knot, from string to long-chain molecules. Knotting in a filament affects its properties, and may be very stable or disappear under slight manipulation. Knotting has been identified in protein backbones for which these mechanical constraints are of fundamental importance to their function, although they are open curves in which knots are not mathematically well defined; knotting can only be identified by closing the ends of the chain. We introduce a new method for resolving knotting in open curves using virtual knots, a wider class of topological objects that do not use a classical closure, capturing the topological ambiguity of open curves. Having analysed all proteins in the Protein Data Bank by this new scheme, we recover and extend previous knotting results, and identify topological interest in some new cases. The statistics of virtual knots in proteins are compared with those of Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important. This work was supported by the Leverhulme Trust Programme Grant ``Scientific Properties of Complex Knots'' and the EPSRC.

  3. The Effect of Manipulatives on Achievement Scores in the Middle School Mathematics Class

    ERIC Educational Resources Information Center

    Doias, Elaine D.

    2013-01-01

    When applied to mathematics education, manipulatives help students to visualize mathematical concepts and apply them to everyday situations. Interest in mathematics instruction has increased dramatically over the past two decades with the introduction of virtual manipulatives, as opposed to the concrete manipulatives that have been employed for…

  4. Recombinant Enaction: Manipulatives Generate New Procedures in the Imagination, by Extending and Recombining Action Spaces

    ERIC Educational Resources Information Center

    Rahaman, Jeenath; Agrawal, Harshit; Srivastava, Nisheeth; Chandrasekharan, Sanjay

    2018-01-01

    Manipulation of physical models such as tangrams and tiles is a popular approach to teaching early mathematics concepts. This pedagogical approach is extended by new computational media, where mathematical entities such as equations and vectors can be virtually manipulated. The cognitive and neural mechanisms supporting such manipulation-based…

  5. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The PER-Force Handcontroller was originally developed for the International Space Station under a Johnson Space Center Small Business Innovation Research (SBIR) contract. Produced by Cybernet Systems Corporation, the unit is a force-reflecting system that manipulates robots or objects by "feel." The Handcontroller moves in six degrees of freedom, with real and virtual reality forces simulated by a 3-D molecular modeling software package. It is used in molecular modeling in metallurgy applications, satellite docking research, and in research on military unmanned ground vehicles.

  6. Intuitive operability evaluation of surgical robot using brain activity measurement to determine immersive reality.

    PubMed

    Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Seki, Masatoshi; Nakashima, Yasutaka; Noguchi, Takehiko; Kasuya, Masahiro; Yokoo, Yuki; Fujie, Masakatsu G

    2012-01-01

    Surgical robots have improved considerably in recent years, but intuitive operability, which represents user inter-operability, has not been quantitatively evaluated. Therefore, for design of a robot with intuitive operability, we propose a method to measure brain activity to determine intuitive operability. The objective of this paper is to determine the master configuration against the monitor that allows users to perceive the manipulator as part of their own body. We assume that the master configuration produces an immersive reality experience for the user of putting his own arm into the monitor. In our experiments, as subjects controlled the hand controller to position the tip of the virtual slave manipulator on a target in a surgical simulator, we measured brain activity through brain-imaging devices. We performed our experiments for a variety of master manipulator configurations with the monitor position fixed. For all test subjects, we found that brain activity was stimulated significantly when the master manipulator was located behind the monitor. We conclude that this master configuration produces immersive reality through the body image, which is related to visual and somatic sense feedback.

  7. Was it less painful for knights? Influence of appearance on pain perception.

    PubMed

    Weeth, A; Mühlberger, A; Shiban, Y

    2017-11-01

    Pain perception is a subjective experience shaped by different factors. In this study, we investigated the influence of a visually manipulated appearance of a virtual arm on pain perception. Specifically, we investigated how pain perception and vegetative skin responses were modified by inducing virtual protection of the right arm with a virtual armour. Participants (n = 32) immersed in virtual reality embodied a virtual arm, which appeared in three different versions (uncovered, neutral or protected). During the virtual reality simulation, the participants received electrical stimulations of varying intensities. Skin conductance level (SCL) was analysed for the anticipation phase (from the moment the arm appeared until the electric stimulation) and the perception of pain (after the electric stimulation). Pain ratings were acquired after the painful stimuli occurred. The sense of embodiment was positive for the unprotected and neutral conditions, and lower for the protected than for the neutral arm. Pain ratings were significantly decreased in the protected arm condition compared with both the unprotected arm and the neutral arm conditions. The SCL measurements showed no significant differences among the three arm types. According to the pain ratings, participants felt significantly less pain in the covered arm condition than in the unprotected and neutral arm conditions. Subjective pain perception was thus decreased by virtual protection of the arm in VR. The simplicity of the manipulation suggests possible practical uses in pain therapy by strengthening patients' own capacities to influence their pain through simple cognitive manipulations via virtual reality. A virtual, covered arm causes differences in reported pain ratings; physiological measurements do not confirm the findings. Visual information about body protection can have an impact on pain perception. © 2017 European Pain Federation - EFIC®.

  8. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
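    As a concrete illustration of the real-time force computation at the heart of haptic rendering, here is a minimal penalty-based (spring-law) contact sketch for a probe tip against a flat surface. This is a standard textbook scheme, not necessarily the algorithms the article develops:

```python
import numpy as np

def haptic_force(tip_pos, plane_normal, plane_point, k=500.0):
    # Penalty-based contact: if the haptic probe tip penetrates the
    # surface, push back along the surface normal with a force
    # proportional to penetration depth (Hooke's law, stiffness k in N/m).
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    depth = np.dot(np.asarray(plane_point, dtype=float)
                   - np.asarray(tip_pos, dtype=float), n)
    if depth <= 0.0:          # tip is above the surface: no contact
        return np.zeros(3)
    return k * depth * n      # restoring force along the normal

# Tip 2 mm below a horizontal floor at the origin (hypothetical values).
f = haptic_force([0.0, -0.002, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0])
print(f)  # [0. 1. 0.] -> 1 N pushing the tip back up
```

    Real haptic renderers refine this with god-object/proxy tracking, friction, and texture models, but the spring law above is the usual starting point.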

  9. Commercialization of JPL Virtual Reality calibration and redundant manipulator control technologies

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Seraji, Homayoun; Fiorini, Paolo; Brown, Robert; Christensen, Brian; Beale, Chris; Karlen, James; Eismann, Paul

    1994-01-01

    Within NASA's recent thrust for industrial collaboration, JPL (Jet Propulsion Laboratory) has recently established two technology cooperation agreements in the robotics area: one on virtual reality (VR) calibration with Deneb Robotics, Inc., and the other on redundant manipulator control with Robotics Research Corporation (RRC). These technology transfer cooperation tasks will enable both Deneb and RRC to commercialize enhanced versions of their products that will greatly benefit both space and terrestrial telerobotic applications.

  10. Comprehensive modelling and simulation of cylindrical nanoparticles manipulation by using a virtual reality environment.

    PubMed

    Korayem, Moharam Habibnejad; Hoshiar, Ali Kafash; Ghofrani, Maedeh

    2017-08-01

    With the expansion of nanotechnology, robots based on the atomic force microscope (AFM) have been widely used as effective tools for displacing nanoparticles and constructing nanostructures. One of the most limiting factors in AFM-based manipulation procedures is the inability to observe the controlled pushing and displacement of nanoparticles while the operation is being performed. To deal with this limitation, a virtual reality environment is used in this paper for observing the manipulation operation. In the simulations performed here, the images acquired by the atomic force microscope are first processed to determine the positions and dimensions of the nanoparticles. Then, by dynamically modelling the transfer of nanoparticles and simulating the critical force-time diagrams, a controlled displacement of nanoparticles is accomplished. The simulations have been further developed for rectangular, V-shape and dagger-shape cantilevers. The established virtual reality environment has made it possible to simulate the manipulation of biological particles in a liquid medium. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Revisiting Mathematics Manipulative Materials

    ERIC Educational Resources Information Center

    Swan, Paul; Marshall, Linda

    2010-01-01

    It is over 12 years since "APMC" published Bob Perry and Peter Howard's research on the use of mathematics manipulative materials in primary mathematics classrooms. Since then the availability of virtual manipulatives and associated access to computers and interactive whiteboards have caused educators to rethink the use of mathematics…

  12. Interactive Immersive Virtual Museum: Digital Documentation for Virtual Interaction

    NASA Astrophysics Data System (ADS)

    Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.

    2018-05-01

    Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, owing to poor user-system interaction, caused by the incomplete maturity of the technology for museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes interaction possible with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move in the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric technique Structure from Motion and then integrated inside the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum by making possible the interaction with perishable, damaged or lost objects and public access to inaccessible or no longer existing places, promoting in this way the preservation of fragile sites.

  13. Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components

    NASA Astrophysics Data System (ADS)

    Dong, Z. H.; Ye, X.; Yang, F.

    2018-05-01

    Focusing on the harsh operating conditions of space manipulators, which cannot withstand relatively large collision momentum, this paper proposes a new concept and technology called soft-contact technology. To solve the collision-dynamics problem of the flexible multi-body system introduced by this technology, the paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamic model based on virtual components, and studies its solution. On this basis, NX is used to carry out modelling and comparative simulation of a space manipulator in three different modes. The results show that the multi-rigid-body + flexible-body-hinge + controllable-damping model effectively limits the amplitude of the force and torque caused by collision with the target satellite.

  14. Towards Virtual FLS: Development of a Peg Transfer Simulator

    PubMed Central

    Arikatla, Venkata S; Ahn, Woojin; Sankaranarayanan, Ganesh; De, Suvranu

    2014-01-01

    Background Peg transfer is one of five tasks in the Fundamentals of Laparoscopic Surgery (FLS) program. We report the development and validation of a Virtual Basic Laparoscopic Skill Trainer-Peg Transfer (VBLaST-PT©) simulator for automatic real-time scoring and objective quantification of performance. Methods We introduced new techniques to allow bi-manual manipulation of pegs and automatic scoring/evaluation while maintaining high simulation quality. We performed a preliminary face and construct validation study with 22 subjects divided into two groups: experts (PGY 4–5, fellows and practicing surgeons) and novices (PGY 1–3). Results Face validation shows high scores for all aspects of the simulation. Two-tailed Mann-Whitney U-tests showed significant differences between the two groups on completion time (p=0.003), FLS score (p=0.002) and the VBLaST-PT© score (p=0.006). Conclusions VBLaST-PT© is a high-quality virtual simulator that showed both face and construct validity. PMID:24030904

  15. Safety margins in older adults increase with improved control of a dynamic object

    PubMed Central

    Hasson, Christopher J.; Sternad, Dagmar

    2014-01-01

    Older adults face decreasing motor capabilities due to pervasive neuromuscular degradations. As a consequence, errors in movement control increase. Thus, older individuals should maintain larger safety margins than younger adults. While this has been shown for object manipulation tasks, several reports on whole-body activities, such as posture and locomotion, demonstrate age-related reductions in safety margins. This is despite increased costs for control errors, such as a fall. We posit that this paradox could be explained by the dynamic challenge presented by the body or also an external object, and that age-related reductions in safety margins are in part due to a decreased ability to control dynamics. To test this conjecture we used a virtual ball-in-cup task that had challenging dynamics, yet afforded an explicit rendering of the physics and safety margin. The hypotheses were: (1) When manipulating an object with challenging dynamics, older adults have smaller safety margins than younger adults. (2) Older adults increase their safety margins with practice. Nine young and 10 healthy older adults practiced moving the virtual ball-in-cup to a target location in exactly 2 s. The accuracy and precision of the timing error quantified skill, and the ball energy relative to an escape threshold quantified the safety margin. Compared to the young adults, older adults had increased timing errors, greater variability, and decreased safety margins. With practice, both young and older adults improved their ability to control the object with decreased timing errors and variability, and increased their safety margins. These results suggest that safety margins are related to the ability to control dynamics, and may explain why in tasks with simple dynamics older adults use adequate safety margins, but in more complex tasks, safety margins may be inadequate. Further, the results indicate that task-specific training may improve safety margins in older adults. PMID:25071566
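    The energy-based safety margin described above (ball energy relative to an escape threshold) might be formalized as the fraction of the escape energy still in reserve. This is a hypothetical formulation for illustration; the authors' exact metric may differ:

```python
def safety_margin(kinetic, potential, escape_energy):
    # Fraction of the escape threshold still in reserve:
    # 1.0 = ball at rest with no stored energy,
    # 0.0 = ball has exactly the energy needed to escape the cup,
    # negative = escape is energetically possible.
    return 1.0 - (kinetic + potential) / escape_energy

# Hypothetical energies in arbitrary units.
m = safety_margin(kinetic=0.2, potential=0.3, escape_energy=1.0)
print(round(m, 3))  # 0.5
```

    Under this reading, the finding that margins grow with practice means the total ball energy trajectories stay further below the escape threshold as control improves.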

  16. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed.

  17. The Fidelity of ’Feel’: Emotional Affordance in Virtual Environments

    DTIC Science & Technology

    2005-07-01

    The Fidelity of “Feel”: Emotional Affordance in Virtual Environments Jacquelyn Ford Morie, Josh Williams, Aimee Dozois, Donat-Pierre Luigi... environment but also the participant. We do this with the focus on what emotional affordances this manipulation will provide. Our first evaluation scenario...emotionally affective VEs. Keywords: Immersive Environments , Virtual Environments , VEs, Virtual Reality, emotion , affordance, fidelity, presence

  18. Using Virtual Manipulatives with Pre-Service Mathematics Teachers to Create Representational Models

    ERIC Educational Resources Information Center

    Cooper, Thomas E.

    2012-01-01

    In mathematics education, physical manipulatives such as algebra tiles, pattern blocks, and two-colour counters are commonly used to provide concrete models of abstract concepts. With these traditional manipulatives, people can communicate with the tools only in one another's presence. This limitation poses difficulties concerning assessment and…

  19. Virtual anthropology.

    PubMed

    Weber, Gerhard W

    2015-02-01

    Comparative morphology, dealing with the diversity of form and shape, and functional morphology, the study of the relationship between the structure and the function of an organism's parts, are both important subdisciplines in biological research. Virtual anthropology (VA) contributes to comparative morphology by taking advantage of technological innovations, and it also offers new opportunities for functional analyses. It exploits digital technologies and pools experts from different domains such as anthropology, primatology, medicine, paleontology, mathematics, statistics, computer science, and engineering. VA as a technical term was coined in the late 1990s from the perspective of anthropologists with the intent of being mostly applied to biological questions concerning recent and fossil hominoids. More generally, however, there are advanced methods to study shape and size or to manipulate data digitally suitable for application to all kinds of primates, mammals, other vertebrates, and invertebrates or to issues regarding plants, tools, or other objects. In this sense, we could also call the field "virtual morphology." The approach yields permanently available virtual copies of specimens and data that comprehensively quantify geometry, including previously neglected anatomical regions. It applies advanced statistical methods, supports the reconstruction of specimens based on reproducible manipulations, and promotes the acquisition of larger samples by data sharing via electronic archives. Finally, it can help identify new, hidden traits, which is particularly important in paleoanthropology, where the scarcity of material demands extracting information from fragmentary remains. This contribution presents a current view of the six main work steps of VA: digitize, expose, compare, reconstruct, materialize, and share. The VA machinery has also been successfully used in biomechanical studies which simulate the stresses and strains appearing in structures. Although methodological issues remain to be solved before results from the two domains can be fully integrated, the various overlaps and cross-fertilizations suggest the widespread appearance of a "virtual functional morphology" in the near future. © 2014 American Association of Physical Anthropologists.

  20. The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality

    PubMed Central

    Leyrer, Markus; Linkenauger, Sally A.; Bülthoff, Heinrich H.; Mohler, Betty J.

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274
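    One simple way to see why a mis-specified eye height shifts egocentric distance estimates is the classic angle-of-declination model, d = h / tan(θ), where θ is the angle below the horizon at which a target on the ground is seen. This is a standard account from the perception literature, sketched here for illustration rather than as the authors' own model:

```python
import math

def perceived_distance(eye_height, declination_rad):
    # Angle-of-declination model: a target on a flat ground plane seen
    # at angle theta below the horizon lies at d = h / tan(theta).
    return eye_height / math.tan(declination_rad)

# Same visual declination, but the assumed eye height is 20% too large
# (hypothetical values) -> the distance estimate inflates by 20%.
d_true = perceived_distance(1.6, math.radians(10))
d_biased = perceived_distance(1.6 * 1.2, math.radians(10))
print(round(d_biased / d_true, 2))  # 1.2
```

    The study's result, that postural cues dominate when visual and postural eye-height information conflict, predicts exactly such systematic scaling of distance estimates.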

  1. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery

    PubMed Central

    Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J.; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng

    2017-01-01

    Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the “integrated image” on semi-transparent glass. Via the registration process, the integrated image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications. PMID:28198442
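    The core of fiducial-based registration of this kind is a chain of rigid transforms: once the tracker reports the marker's pose in the camera frame, CT-derived geometry defined relative to the splint can be mapped into the camera frame for overlay. The sketch below is a generic illustration with hypothetical frame names and made-up poses, not the paper's implementation.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical poses: the tracker reports the fiducial (OSM) pose in the
# camera frame; CT-derived geometry lives in the splint/marker frame.
cam_T_marker = pose_matrix(np.eye(3), [0.0, 0.0, 0.40])      # marker 40 cm ahead
marker_T_model = pose_matrix(np.eye(3), [0.01, 0.0, -0.02])  # model offset on splint

# Composing the chain places each CT vertex in the camera frame, ready
# to be rendered over the live view on the semi-transparent display:
cam_T_model = cam_T_marker @ marker_T_model
vertex_cam = cam_T_model @ np.array([0.0, 0.0, 0.0, 1.0])    # a model-space point
print(vertex_cam[:3])
```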

  2. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery.

    PubMed

    Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng

    2017-02-15

    Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the "integrated image" on semi-transparent glass. Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications.

  3. The importance of postural cues for determining eye height in immersive virtual reality.

    PubMed

    Leyrer, Markus; Linkenauger, Sally A; Bülthoff, Heinrich H; Mohler, Betty J

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height.

  4. Impossible spaces: maximizing natural walking in virtual environments with self-overlapping architecture.

    PubMed

    Suma, Evan A; Lipps, Zachary; Finkelstein, Samantha; Krum, David M; Bolas, Mark

    2012-04-01

    Walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce "impossible spaces," a new design mechanic for virtual environments that wish to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that the larger virtual rooms that expanded to maximally fill our available 9.14 m x 9.14 m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space was uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
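    The overlap percentages reported above are straightforward to compute for idealized rectangular rooms. A minimal sketch, with hypothetical room coordinates rather than the study's layouts:

```python
def overlap_fraction(room_a, room_b):
    """Fraction of room_a's floor area shared with room_b, for axis-aligned
    rectangular rooms given as (xmin, ymin, xmax, ymax) in metres."""
    ax0, ay0, ax1, ay1 = room_a
    bx0, by0, bx1, by1 = room_b
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))   # width of the shared strip
    h = max(0.0, min(ay1, by1) - max(ay0, by0))   # height of the shared strip
    return (w * h) / ((ax1 - ax0) * (ay1 - ay0))

# Two 4 m x 4 m virtual rooms whose layouts share a 2 m wide strip:
print(overlap_fraction((0, 0, 4, 4), (2, 0, 6, 4)))   # -> 0.5
```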

  5. Two-Time Scale Virtual Sensor Design for Vibration Observation of a Translational Flexible-Link Manipulator Based on Singular Perturbation and Differential Games

    PubMed Central

    Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng

    2016-01-01

    Effective feedback control requires information about all state variables of the system. In the translational flexible-link manipulator (TFM) system, however, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an unlimited number of sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals of the TFM and their time derivatives; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized to minimize the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840
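    The abstract's observer design rests on standard state-estimation machinery. As a toy illustration only (the TFM dynamics, singular-perturbation decomposition, and game-theoretic gain optimization are far richer), a Luenberger observer can recover an unmeasured velocity from position measurements; the gains L1 and L2 below are hand-picked, not optimized via differential games.

```python
def simulate_observer(L1, L2, dt=0.001, steps=5000):
    """Luenberger observer for a double integrator x'' = u, with only
    position measured; returns the final velocity-estimation error."""
    x, v = 0.0, 1.0        # true position and (unmeasured) velocity
    xh, vh = 0.0, 0.0      # observer starts with the wrong velocity
    u = 0.0                # no control input in this toy run
    for _ in range(steps):
        err = x - xh                   # output error drives the observer
        xh += dt * (vh + L1 * err)
        vh += dt * (u + L2 * err)
        x += dt * v                    # true plant, Euler-integrated
        v += dt * u
    return abs(v - vh)

print(simulate_observer(L1=20.0, L2=100.0))   # error decays toward zero
```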

  6. Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality.

    PubMed

    Galvan Debarba, Henrique; Bovet, Sidney; Salomon, Roy; Blanke, Olaf; Herbelin, Bruno; Boulic, Ronan

    2017-01-01

    Empirical research on the bodily self has shown that the body representation is malleable, and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR) we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when the possibility of alternating between first and third person perspective at will was presented. Our results support that illusory ownership of a virtual body can be achieved in both first and third person perspectives under congruent visuo-motor-tactile condition. However, subjective body ownership and reaction to threat were generally stronger for first person perspective and alternating condition than for third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.

  7. Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality

    PubMed Central

    Bovet, Sidney; Salomon, Roy; Blanke, Olaf; Herbelin, Bruno; Boulic, Ronan

    2017-01-01

    Empirical research on the bodily self has shown that the body representation is malleable, and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR) we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when the possibility of alternating between first and third person perspective at will was presented. Our results support that illusory ownership of a virtual body can be achieved in both first and third person perspectives under congruent visuo-motor-tactile condition. However, subjective body ownership and reaction to threat were generally stronger for first person perspective and alternating condition than for third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences. PMID:29281736

  8. Proteins analysed as virtual knots

    NASA Astrophysics Data System (ADS)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.
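    Whatever closure or virtual-knot formalism is applied downstream, the analysis of an open space curve typically begins with the signed crossings of a planar projection. The sketch below computes them for a polyline; it is a generic illustration (the sign convention shown is one common choice), not the authors' pipeline.

```python
def signed_crossings(points):
    """Signed crossings in the xy-projection of an open 3D polyline.
    Over/under is decided by the interpolated z at each crossing; the
    sign convention used here is one common choice."""
    def seg_intersect(p, q, r, s):
        # Intersection parameters of 2D segments pq and rs, or None.
        d = (q[0] - p[0]) * (s[1] - r[1]) - (q[1] - p[1]) * (s[0] - r[0])
        if d == 0:
            return None
        t = ((r[0] - p[0]) * (s[1] - r[1]) - (r[1] - p[1]) * (s[0] - r[0])) / d
        u = ((r[0] - p[0]) * (q[1] - p[1]) - (r[1] - p[1]) * (q[0] - p[0])) / d
        return (t, u, d) if 0 < t < 1 and 0 < u < 1 else None

    crossings = []
    for i in range(len(points) - 1):
        for j in range(i + 2, len(points) - 1):   # skip adjacent segments
            hit = seg_intersect(points[i], points[i + 1], points[j], points[j + 1])
            if hit:
                t, u, d = hit                     # d is the 2D cross product
                zi = points[i][2] + t * (points[i + 1][2] - points[i][2])
                zj = points[j][2] + u * (points[j + 1][2] - points[j][2])
                sign = 1 if d > 0 else -1
                crossings.append(sign if zi > zj else -sign)
    return crossings

# A polyline whose projection crosses itself once, first strand passing under:
print(signed_crossings([(0, 0, 0), (2, 2, 0), (2, -1, 1), (0, 1, 1)]))  # -> [-1]
```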

  9. Proteins analysed as virtual knots

    PubMed Central

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-01-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important. PMID:28205562

  10. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims to answer the needs of the various vehicle conception steps [1]. Starting from vehicle Design, through the subsequent Engineering steps, Ergonomic evaluation and perceived quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as an iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by nomad devices' current uses (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a homemade low-cost four-sided virtual reality room, powered by ultra-short-range and standard HD home projectors.

  11. Virtual experiments in electronics: beyond logistics, budgets, and the art of the possible

    NASA Astrophysics Data System (ADS)

    Chapman, Brian

    1999-09-01

    It is common and correct to suppose that computers support flexible delivery of educational resources by offering virtual experiments that replicate and substitute for experiments traditionally offered in conventional teaching laboratories. However, traditional methods are limited by logistics, costs, and what is physically possible to accomplish on a laboratory bench. Virtual experiments allow experimental approaches to teaching and learning to transcend these limits. This paper analyses recent and current developments in educational software for 1st-year physics, 2nd-year electronics engineering and 3rd-year communication engineering, based on three criteria: (1) Is the virtual experiment possible in a real laboratory? (2) How direct is the link between the experimental manipulation and the reinforcement of theoretical learning? (3) What impact might the virtual experiment have on the learner's acquisition of practical measurement skills? Virtual experiments allow more flexibility in the directness of the link between experimental manipulation and the theoretical message. However, increasing the directness of this link may reduce or even abolish the measurement processes associated with traditional experiments. Virtual experiments thus pose educational challenges: (a) expanding the design of experimentally based curricula beyond traditional boundaries and (b) ensuring that the learner acquires sufficient experience in making practical measurements.

  12. Allocentric information is used for memory-guided reaching in depth: A virtual reality study.

    PubMed

    Klinghammer, Mathias; Schütz, Immo; Blohm, Gunnar; Fiehler, Katja

    2016-12-01

    Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay the scene reappeared, but with one object missing (=reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target-distance. Reaching endpoints systematically varied with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Thereby, retinal disparity and vergence as well as object size provide important binocular and monocular depth cues.
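    The binocular cues mentioned above can be quantified directly: the vergence angle of a fixated target and the relative disparity between two depths follow from simple triangulation. The functions below are illustrative, not the study's model, and assume a typical 64 mm inter-pupillary distance.

```python
import math

def vergence_deg(distance_m, ipd_m=0.064):
    """Vergence angle (degrees) when fixating a target at distance_m,
    assuming a typical 64 mm inter-pupillary distance."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def relative_disparity_deg(d_fixation, d_object, ipd_m=0.064):
    """Relative disparity between the fixation distance and another object,
    as the difference of vergence angles (positive = nearer than fixation)."""
    return vergence_deg(d_object, ipd_m) - vergence_deg(d_fixation, ipd_m)

# The same 10 cm depth offset yields far more disparity near than far,
# which is one reason binocular depth cues weaken with viewing distance:
print(relative_disparity_deg(0.5, 0.4))   # large, near viewing
print(relative_disparity_deg(2.0, 1.9))   # much smaller, far viewing
```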

  13. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an EyePhone LX, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and does not support advanced texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, the AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. 
WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  14. The role of affordances in children's learning performance and efficiency when using virtual manipulative mathematics touch-screen apps

    NASA Astrophysics Data System (ADS)

    Moyer-Packenham, Patricia S.; Bullock, Emma K.; Shumway, Jessica F.; Tucker, Stephen I.; Watts, Christina M.; Westenskow, Arla; Anderson-Pence, Katie L.; Maahs-Fladung, Cathy; Boyer-Thurgood, Jennifer; Gulkilik, Hilal; Jordan, Kerry

    2016-03-01

    This paper focuses on understanding the role that affordances played in children's learning performance and efficiency during clinical interviews of their interactions with mathematics apps on touch-screen devices. One hundred children, ages 3 to 8, each used six different virtual manipulative mathematics apps during 30-40-min interviews. The study used a convergent mixed methods design, in which quantitative and qualitative data were collected concurrently to answer the research questions (Creswell and Plano Clark 2011). Videos were used to capture each child's interactions with the virtual manipulative mathematics apps, document learning performance and efficiency, and record children's interactions with the affordances within the apps. Quantitized video data answered the research question on differences in children's learning performance and efficiency between pre- and post-assessments. A Wilcoxon matched pairs signed-rank test was used to explore these data. Qualitative video data was used to identify affordance access by children when using each app, identifying 95 potential helping and hindering affordances among the 18 apps. The results showed that there were changes in children's learning performance and efficiency when children accessed a helping or a hindering affordance. Helping affordances were more likely to be accessed by children who progressed between the pre- and post-assessments, and the same affordances had helping and hindering effects for different children. These results have important implications for the design of virtual manipulative mathematics learning apps.

  15. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment.

    PubMed

    Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung

    2017-09-22

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, as well as discussing similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. 
Also, to realize the service modularity, a use case scenario is studied and implemented.
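    The reuse logic described above can be caricatured as a registry that answers each service request either with an existing matching virtual object or with a fresh instance. This is a deliberately simplified sketch: the matching rule used here (same type, capability subset) stands in for the article's ontology-based similarity matching, and all names are hypothetical.

```python
class VirtualObjectRegistry:
    """Answers service requests by reusing an existing virtual object when
    one matches, instead of instantiating a duplicate."""
    def __init__(self):
        self._objects = []     # (obj_type, capabilities, instance_id) triples
        self._next_id = 0

    def request(self, obj_type, capabilities):
        """Return (instance_id, reused): reuse an object of the same type
        whose capability set covers the request, else create a new one."""
        caps = frozenset(capabilities)
        for t, c, oid in self._objects:
            if t == obj_type and caps <= c:
                return oid, True
        self._next_id += 1
        self._objects.append((obj_type, caps, self._next_id))
        return self._next_id, False

registry = VirtualObjectRegistry()
print(registry.request("temperature_sensor", {"read"}))   # new instance
print(registry.request("temperature_sensor", {"read"}))   # reused, no duplicate
```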

  16. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment

    PubMed Central

    Chong, Ilyoung

    2017-01-01

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, as well as discussing similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. 
Also, to realize the service modularity, a use case scenario is studied and implemented. PMID:28937590

  17. Probabilistic motor sequence learning in a virtual reality serial reaction time task.

    PubMed

    Sense, Florian; van Rijn, Hedderik

    2018-01-01

    The serial reaction time task is widely used to study learning and memory. The task is traditionally administered by showing target positions on a computer screen and collecting responses using a button box or keyboard. By comparing response times to random or sequenced items or by using different transition probabilities, various forms of learning can be studied. However, this traditional laboratory setting limits the number of possible experimental manipulations. Here, we present a virtual reality version of the serial reaction time task and show that learning effects emerge as expected despite the novel way in which responses are collected. We also show that response times are distributed as expected. The current experiment was conducted in a blank virtual reality room to verify these basic principles. For future applications, the technology can be used to modify the virtual reality environment in any conceivable way, permitting a wide range of previously impossible experimental manipulations.
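    A probabilistic sequence of the kind described is generated by following a fixed pattern with high probability and deviating otherwise. The sketch below is illustrative only: the pattern, the number of target positions, and the 0.85 transition probability are assumptions, not the study's parameters.

```python
import random

def probabilistic_sequence(length, positions=4, p_sequence=0.85, seed=42):
    """Target positions for a probabilistic serial reaction time task:
    with probability p_sequence the target follows a fixed repeating
    pattern; otherwise it jumps to a random other position."""
    rng = random.Random(seed)
    pattern = [0, 2, 1, 3]                     # hypothetical underlying sequence
    out = []
    for trial in range(length):
        expected = pattern[trial % len(pattern)]
        if rng.random() < p_sequence:
            out.append(expected)
        else:
            out.append(rng.choice([p for p in range(positions) if p != expected]))
    return out

seq = probabilistic_sequence(1000)
hits = sum(s == [0, 2, 1, 3][i % 4] for i, s in enumerate(seq))
print(hits / len(seq))    # close to the 0.85 transition probability
```

    Comparing response times on pattern-consistent versus deviant trials is then the standard index of sequence learning, in VR exactly as on a keyboard.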

  18. Virtual reality in rhinology-a new dimension of clinical experience.

    PubMed

    Klapan, Ivica; Raos, Pero; Galeta, Tomislav; Kubat, Goranka

    2016-07-01

    There is often a need to more precisely identify the extent of pathology and the fine elements of intracranial anatomic features during the diagnostic process and during many operations in the nose, sinus, orbit, and skull base region. In two case reports, we describe the methods used in the diagnostic workup and surgical therapy in the nose and paranasal sinus region. Besides baseline x-ray, multislice computed tomography, and magnetic resonance imaging, operative field imaging was performed via a rapid prototyping model, virtual endoscopy, and 3-D imaging. Different head tissues were visualized in different colors, showing their anatomic interrelations and the extent of pathologic tissue within the operative field. This approach has not yet been used as a standard preoperative or intraoperative procedure in otorhinolaryngology. In this way, we tried to understand the new, visualized "world of anatomic relations within the patient's head" by creating an impression of perception (virtual perception) of the given position of all elements in a particular anatomic region of the head, which does not exist in the real world (virtual world). This approach was aimed at upgrading the diagnostic workup and surgical therapy by ensuring a faster, safer and, above all, simpler operative procedure. In conclusion, any ENT specialist can provide virtual reality support in implementing surgical procedures, with additional control of risks and within the limits of normal tissue, without additional trauma to the surrounding tissue in the anatomic region. At the same time, the virtual reality support provides an impression of the virtual world as the specialist navigates through it and manipulates virtual objects.

  19. Tangible display systems: direct interfaces for computer-based studies of surface appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.; Ferwerda, James A.

    2010-02-01

    When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
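    Estimating display orientation from a built-in accelerometer, as the first-generation system does, reduces (for a static device) to recovering tilt from the gravity vector. A minimal sketch, assuming a conventional device-frame axis layout rather than the authors' actual calibration:

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Pitch and roll (degrees) of a static device from the gravity vector
    reported by a 3-axis accelerometer (conventional axis layout assumed)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_accelerometer(0.0, 0.0, 9.81))   # flat on the table: both near 0
print(tilt_from_accelerometer(0.0, 6.94, 6.94))  # rolled about 45 degrees
```

    Feeding these angles to the renderer each frame is what lets the reflections shift realistically as the user rotates the display.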

  20. Virtual Reality as a Tool in the Education

    ERIC Educational Resources Information Center

    Piovesan, Sandra Dutra; Passerino, Liliana Maria; Pereira, Adriana Soares

    2012-01-01

    Virtual reality is being used more and more in education, enabling students to discover, explore, and build their own knowledge. This paper presents educational software for presence or distance education, for subjects of Formal Language, where the student can virtually manipulate the target that must be explored, analyzed and…

  1. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.

  2. Age-Related Differences and Cognitive Correlates of Self-Reported and Direct Navigation Performance: The Effect of Real and Virtual Test Conditions Manipulation

    PubMed Central

    Taillade, Mathieu; N'Kaoua, Bernard; Sauzéon, Hélène

    2016-01-01

    The present study investigated the effect of aging on direct navigation measures and self-reported ones according to the real-virtual test manipulation. Navigation (wayfinding tasks) and spatial memory (paper-pencil tasks) performances, obtained either in real-world or in virtual-laboratory test conditions, were compared between young (n = 32) and older (n = 32) adults who had self-rated their everyday navigation behavior (SBSOD scale). Real age-related differences were observed in navigation tasks as well as in paper-pencil tasks, which investigated spatial learning relative to the distinction between survey-route knowledge. The manipulation of test conditions (real vs. virtual) did not change these age-related differences, which are mostly explained by age-related decline in both spatial abilities and executive functioning (measured with neuropsychological tests). In contrast, elderly adults did not differ from young adults in their self-reporting relative to everyday navigation, suggesting some underestimation of navigation difficulties by elderly adults. Also, spatial abilities in young participants had a mediating effect on the relations between actual and self-reported navigation performance, but not for older participants. So, it is assumed that the older adults carried out the navigation task with fewer available spatial abilities compared to young adults, resulting in inaccurate self-estimates. PMID:26834666

  4. Smart glove: hand master using magnetorheological fluid actuators

    NASA Astrophysics Data System (ADS)

    Nam, Y. J.; Park, M. K.; Yamane, R.

    2007-12-01

    In this study, a hand master using five miniature magneto-rheological (MR) actuators, called 'the smart glove', is introduced. This hand master is intended to display haptic feedback to the fingertips of a human user interacting with virtual objects in a virtual environment. For the smart glove, two effective approaches are proposed: (i) by using the MR actuator, which can be considered a passive actuator, the smart glove is made simple in structure, high in power, low in inertia, safe in interface and stable in haptic feedback, and (ii) with a novel flexible link mechanism designed for position-force transmission between the fingertips and the actuators, the number of actuators and the weight of the smart glove can be reduced. These features improve the manipulability and portability of the smart glove. The feasibility of the constructed smart glove is verified through a basic performance evaluation.

  5. Short Term Motor-Skill Acquisition Improves with Size of Self-Controlled Virtual Hands

    PubMed Central

    Ossmy, Ori; Mukamel, Roy

    2017-01-01

    Visual feedback in general, and from the body in particular, is known to influence the performance of motor skills in humans. However, it is unclear how the acquisition of motor skills depends on specific visual feedback parameters such as the size of the performing effector. Here, 21 healthy subjects physically trained to perform sequences of finger movements with their right hand. Through the use of 3D Virtual Reality devices, visual feedback during training consisted of virtual hands presented on the screen, tracking subjects' hand movements in real time. Importantly, the setup allowed us to manipulate the size of the displayed virtual hands across experimental conditions. We found that performance gains increase with the size of virtual hands. In contrast, when subjects trained by mere observation (i.e., in the absence of physical movement), manipulating the size of the virtual hand did not significantly affect subsequent performance gains. These results demonstrate that when it comes to short-term motor skill learning, the size of visual feedback matters. Furthermore, these results suggest that highest performance gains in individual subjects are achieved when the size of the virtual hand matches their real hand size. These results may have implications for optimizing motor training schemes. PMID:28056023

  6. DigBody®: A new 3D modeling tool for nasal virtual surgery.

    PubMed

    Burgos, M A; Sanmiguel-Rojas, E; Singh, Narinder; Esteban-Ortega, F

    2018-07-01

    Recent studies have demonstrated that a significant number of surgical procedures for nasal airway obstruction (NAO) have a high rate of surgical failure. In part, this problem is due to the lack of reliable objective clinical parameters to aid surgeons during preoperative planning. Modeling tools that allow virtual surgery to be performed do exist, but all require direct manipulation of computed tomography (CT) or magnetic resonance imaging (MRI) data. Specialists in Rhinology have criticized these tools for their complex user interface, and have requested more intuitive, user-friendly and powerful software to make virtual surgery more accessible and realistic. In this paper we present a new virtual surgery software tool, DigBody®. This new surgery module is integrated into the computational fluid dynamics (CFD) program MeComLand®, which was developed exclusively to analyze nasal airflow. DigBody® works directly with a 3D nasal model that mimics real surgery. Furthermore, this surgery module permits direct assessment of the operated cavity following virtual surgery by CFD simulation. The effectiveness of DigBody® has been demonstrated by real surgery on two patients based on prior virtual operation results. Both subjects experienced excellent surgical outcomes with no residual nasal obstruction. This tool has great potential to aid surgeons in modeling potential surgical maneuvers, minimizing complications, and being confident that patients will receive optimal postoperative outcomes, validated by personalized CFD testing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules.

    PubMed

    Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan

    2015-01-01

    Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulation of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM), introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926-1932, simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on the Ag(111) surface.

  8. Perception of Virtual Audiences.

    PubMed

    Chollet, Mathieu; Scherer, Stefan

    2017-01-01

    A growing body of evidence shows that virtual audiences are a valuable tool in the treatment of social anxiety, and recent work shows that they are also useful in public-speaking training programs. However, little research has focused on how such audiences are perceived and on how the behavior of virtual audiences can be manipulated to create various types of stimuli. The authors used a crowdsourcing methodology to create a virtual audience nonverbal behavior model and, with it, created a dataset of videos with virtual audiences containing varying behaviors. Using this dataset, they investigated how virtual audiences are perceived and which factors affect this perception.

  9. Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior.

    PubMed

    Poganiatz, I; Wagner, H

    2001-04-01

    Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.
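    The broadband manipulation described (adding a fixed overall interaural level difference while leaving frequency-specific and temporal cues untouched) amounts to a pair of per-channel gains. The following sketch assumes stereo stimuli stored as NumPy arrays; it illustrates the kind of manipulation involved, not the authors' stimulus pipeline.

```python
import numpy as np

def add_overall_ild(left, right, ild_db):
    """Impose an additional broadband interaural level difference of
    ild_db decibels (positive = left ear louder) by scaling the two
    channels symmetrically. The waveforms' fine structure, and hence
    frequency-specific and interaural time cues, are left untouched."""
    g = 10.0 ** (ild_db / 40.0)   # split the level change across both ears
    return np.asarray(left, float) * g, np.asarray(right, float) / g

def broadband_ild_db(left, right):
    """Overall level difference (dB, RMS-based) of a stereo stimulus."""
    rms = lambda x: np.sqrt(np.mean(np.square(np.asarray(x, float))))
    return 20.0 * np.log10(rms(left) / rms(right))
```

    Applying a +6 dB manipulation to an initially balanced stimulus shifts its measured broadband ILD by exactly 6 dB while each sample keeps its original waveform shape.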

  10. Perception and Haptic Rendering of Friction Moments.

    PubMed

    Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T

    2011-01-01

    This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independent of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the user's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as during object manipulation in a virtual reality environment, was evaluated experimentally.

  11. Ants: the supreme soil manipulators

    USDA-ARS?s Scientific Manuscript database

    This review focuses on the semiochemical interactions between ants and their soil environment. Ants occupy virtually every ecological niche and have evolved mechanisms not just to cope with, but also to manipulate, soil organisms. The metapleural gland, specific to ants, was thought to be the major sourc...

  12. Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR.

    PubMed

    Langbehn, Eike; Lubos, Paul; Bruder, Gerd; Steinicke, Frank

    2017-04-01

    Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.
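    One way to formalize a bending gain is as the ratio between the curvature (1/radius) of the physical path and that of the virtual path, with the curvature difference injected as extra camera yaw per meter walked. The definitions below are a plausible reading of the abstract, not necessarily the paper's exact formulation.

```python
def bending_gain(real_radius_m, virtual_radius_m):
    """Ratio of physical to virtual path curvature (curvature = 1/r).
    A gain of 1 means no redirection; larger gains bend the user's
    physical path more strongly than the virtual path they perceive."""
    return virtual_radius_m / real_radius_m

def injected_yaw_per_meter(real_radius_m, virtual_radius_m):
    """Extra camera rotation (radians per meter walked) needed so that
    a user walking the physical arc perceives the virtual arc."""
    return 1.0 / real_radius_m - 1.0 / virtual_radius_m

# Virtual corridor curving with radius 4 m, walked on a 2 m physical arc:
gain = bending_gain(2.0, 4.0)               # 2.0
yaw = injected_yaw_per_meter(2.0, 4.0)      # 0.25 rad per meter walked
```

    Detection-threshold experiments like the one reported then ask how large this gain can grow before users notice the discrepancy.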

  13. Vision-Based Haptic Feedback for Remote Micromanipulation in-SEM Environment

    NASA Astrophysics Data System (ADS)

    Bolopion, Aude; Dahmen, Christian; Stolle, Christian; Haliyo, Sinan; Régnier, Stéphane; Fatikow, Sergej

    2012-07-01

    This article presents an intuitive environment for remote micromanipulation composed of both haptic feedback and virtual reconstruction of the scene. To enable nonexpert users to perform complex teleoperated micromanipulation tasks, it is of utmost importance to provide them with information about the 3-D relative positions of the objects and the tools. Haptic feedback is an intuitive way to transmit such information. Since position sensors are not available at this scale, visual feedback is used to derive information about the scene. In this work, three different techniques are implemented, evaluated, and compared to derive the object positions from scanning electron microscope images. The modified correlation matching with generated template algorithm is accurate and provides reliable detection of objects. To track the tool, a marker-based approach is chosen since fast detection is required for stable haptic feedback. Information derived from these algorithms is used to propose an intuitive remote manipulation system that enables users situated in geographically distant sites to benefit from specific equipment, such as SEMs. Stability of the haptic feedback is ensured by the minimization of the delays, the computational efficiency of vision algorithms, and the proper tuning of the haptic coupling. Virtual guides are proposed to avoid any involuntary collisions between the tool and the objects. This approach is validated by a teleoperation involving melamine microspheres with a diameter of less than 2 μm between Paris, France and Oldenburg, Germany.
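    Correlation-based object detection of the kind used to locate the microspheres can be illustrated with a brute-force normalized cross-correlation search. The paper's "modified correlation matching with generated template" algorithm is more elaborate; the function below is only a minimal sketch of the underlying idea.

```python
import numpy as np

def match_template_ncc(image, template):
    """Return the (row, col) top-left position where `template` best
    matches `image` under normalized cross-correlation (NCC)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt(np.sum(t ** 2))
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt(np.sum(wz ** 2)) * t_norm
            score = np.sum(wz * t) / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Recover the known position of a small patch embedded in a blank image:
img = np.zeros((20, 20))
tpl = np.arange(16, dtype=float).reshape(4, 4)
img[5:9, 7:11] = tpl
```

    A production tracker would run a search like this (typically FFT-accelerated) on each SEM frame, feeding the detected positions to the haptic coupling.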

  14. Mapping the zone of eye-height utility for seated and standing observers

    NASA Technical Reports Server (NTRS)

    Wraga, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    2000-01-01

    In a series of experiments, we delimited a region within the vertical axis of space in which eye height (EH) information is used maximally to scale object heights, referred to as the "zone of eye height utility" (Wraga, 1999b Journal of Experimental Psychology, Human Perception and Performance 25 518-530). To test the lower limit of the zone, linear perspective (on the floor) was varied via introduction of a false perspective (FP) gradient while all sources of EH information except linear perspective were held constant. For seated (experiment 1a) observers, the FP gradient produced overestimations of height for rectangular objects up to 0.15 EH tall. This value was taken to be just outside the lower limit of the zone. This finding was replicated in a virtual environment, for both seated (experiment 1b) and standing (experiment 2) observers. For the upper limit of the zone, EH information itself was manipulated by lowering observers' center of projection in a virtual scene. Lowering the effective EH of standing (experiment 3) and seated (experiment 4) observers produced corresponding overestimations of height for objects up to about 2.5 EH. This zone of approximately 0.20-2.5 EH suggests that the human visual system weights size information differentially, depending on its efficacy.
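    The eye-height scaling at issue rests on the horizon-ratio relation: for an object standing on the ground plane, the horizon intersects it at exactly one eye height, so its height is recoverable from the visual angles to its top and base. A sketch under the assumption of a flat ground plane and level gaze; the zone bounds come from the abstract, and the function names are illustrative.

```python
import math

def height_from_eye_height(eye_height_m, angle_to_top, angle_to_base):
    """Recover an object's height from eye height (m) and the signed
    visual angles (radians, positive above the horizon) to its top and
    base. Assumes the object stands on a flat ground plane."""
    distance = eye_height_m / math.tan(-angle_to_base)  # base lies below horizon
    return eye_height_m + distance * math.tan(angle_to_top)

def in_zone_of_eye_height_utility(height_m, eye_height_m, lo=0.20, hi=2.5):
    """The study's zone (~0.20-2.5 EH) in which eye-height information
    is reported to be used maximally for scaling object heights."""
    return lo <= height_m / eye_height_m <= hi
```

    For example, with a 1.6 m eye height and an object at 4 m whose base sits atan(0.4) below the horizon and whose top sits atan(0.2) above it, the recovered height is 2.4 m, i.e. 1.5 EH, inside the zone.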

  15. Training to acquire psychomotor skills for endoscopic endonasal surgery using a personal webcam trainer.

    PubMed

    Hirayama, Ryuichi; Fujimoto, Yasunori; Umegaki, Masao; Kagawa, Naoki; Kinoshita, Manabu; Hashimoto, Naoya; Yoshimine, Toshiki

    2013-05-01

    Existing training methods for neuroendoscopic surgery have mainly emphasized the acquisition of anatomical knowledge and procedures for operating an endoscope and instruments. For laparoscopic surgery, various training systems have been developed to teach handling of an endoscope as well as the manipulation of instruments for speedy and precise endoscopic performance using both hands. In endoscopic endonasal surgery (EES), especially using a binostril approach to the skull base and intradural lesions, the learning of more meticulous manipulation of instruments is mandatory, and it may be necessary to develop another type of training method for acquiring psychomotor skills for EES. Authors of the present study developed an inexpensive, portable personal trainer using a webcam and objectively evaluated its utility. Twenty-five neurosurgeons volunteered for this study and were divided into 2 groups, a novice group (19 neurosurgeons) and an experienced group (6 neurosurgeons). Before and after the exercises of set tasks with a webcam box trainer, the basic endoscopic skills of each participant were objectively assessed using the virtual reality simulator (LapSim) while executing 2 virtual tasks: grasping and instrument navigation. Scores for the following 11 performance variables were recorded: instrument time, instrument misses, instrument path length, and instrument angular path (all of which were measured in both hands), as well as tissue damage, max damage, and finally overall score. Instrument time was indicated as movement speed; instrument path length and instrument angular path as movement efficiency; and instrument misses, tissue damage, and max damage as movement precision. In the novice group, movement speed and efficiency were significantly improved after the training. In the experienced group, significant improvement was not shown in the majority of virtual tasks. Before the training, significantly greater movement speed and efficiency were demonstrated in the experienced group, but no difference in movement precision was shown between the 2 groups. After the training, no significant differences were shown between the 2 groups in the majority of the virtual tasks. Analysis revealed that the webcam trainer improved the basic skills of the novices, increasing movement speed and efficiency without sacrificing movement precision. Novices using this unique webcam trainer showed improvement in psychomotor skills for EES. The authors believe that training in terms of basic endoscopic skills is meaningful and that the webcam training system can play a role in daily off-the-job training for EES.
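    Two of the efficiency metrics named above, instrument path length and instrument angular path, can be computed directly from sampled tip positions. A minimal sketch, not the LapSim implementation:

```python
import math

def path_length(points):
    """Total 3-D distance travelled by the instrument tip; shorter
    paths indicate more efficient movement."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def angular_path(points):
    """Accumulated change of direction (radians) along the trajectory;
    large values indicate jerky, inefficient movement."""
    total = 0.0
    for a, b, c in zip(points, points[1:], points[2:]):
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - b[i] for i in range(3))
        nu, nv = math.hypot(*u), math.hypot(*v)
        if nu > 0 and nv > 0:
            cosang = sum(x * y for x, y in zip(u, v)) / (nu * nv)
            total += math.acos(max(-1.0, min(1.0, cosang)))
    return total
```

    An L-shaped path of two 90-degree segments, for instance, has a path length equal to the sum of the segments and an angular path of pi/2.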

  16. Virtual Reality Exploration and Planning for Precision Colorectal Surgery.

    PubMed

    Guerriero, Ludovica; Quero, Giuseppe; Diana, Michele; Soler, Luc; Agnus, Vincent; Marescaux, Jacques; Corcione, Francesco

    2018-06-01

    Medical software can build a digital clone of the patient with 3-dimensional reconstruction of Digital Imaging and Communication in Medicine images. The virtual clone can be manipulated (rotations, zooms, etc.), and the various organs can be selectively displayed or hidden to facilitate a virtual reality preoperative surgical exploration and planning. We present preliminary cases showing the potential interest of virtual reality in colorectal surgery for both cases of diverticular disease and colonic neoplasms. This was a single-center feasibility study. The study was conducted at a tertiary care institution. Two patients underwent a laparoscopic left hemicolectomy for diverticular disease, and 1 patient underwent a laparoscopic right hemicolectomy for cancer. The 3-dimensional virtual models were obtained from preoperative CT scans. The virtual model was used to perform preoperative exploration and planning. Intraoperatively, one of the surgeons was manipulating the virtual reality model, using the touch screen of a tablet, which was interactively displayed to the surgical team. The main outcome was evaluation of the precision of virtual reality in colorectal surgery planning and exploration. In 1 patient undergoing laparoscopic left hemicolectomy, an abnormal origin of the left colic artery beginning as an extremely short common trunk from the inferior mesenteric artery was clearly seen in the virtual reality model. This finding was missed by the radiologist on CT scan. The precise identification of this vascular variant granted a safe and adequate surgery. In the remaining cases, the virtual reality model helped to precisely estimate the vascular anatomy, providing key landmarks for a safer dissection. A larger sample size would be necessary to definitively assess the efficacy of virtual reality in colorectal surgery. Virtual reality can provide an enhanced understanding of crucial anatomical details, both preoperatively and intraoperatively, which could contribute to improve safety in colorectal surgery.

  17. Emergence of Virtual Reality as a Tool for Upper Limb Rehabilitation: Incorporation of Motor Control and Motor Learning Principles

    PubMed Central

    Weiss, Patrice L.; Keshner, Emily A.

    2015-01-01

    The primary focus of rehabilitation for individuals with loss of upper limb movement as a result of acquired brain injury is the relearning of specific motor skills and daily tasks. This relearning is essential because the loss of upper limb movement often results in a reduced quality of life. Although rehabilitation strives to take advantage of neuroplastic processes during recovery, results of traditional approaches to upper limb rehabilitation have not entirely met this goal. In contrast, enriched training tasks, simulated with a wide range of low- to high-end virtual reality–based simulations, can be used to provide meaningful, repetitive practice together with salient feedback, thereby maximizing neuroplastic processes via motor learning and motor recovery. Such enriched virtual environments have the potential to optimize motor learning by manipulating practice conditions that explicitly engage motivational, cognitive, motor control, and sensory feedback–based learning mechanisms. The objectives of this article are to review motor control and motor learning principles, to discuss how they can be exploited by virtual reality training environments, and to provide evidence concerning current applications for upper limb motor recovery. The limitations of the current technologies with respect to their effectiveness and transfer of learning to daily life tasks also are discussed. PMID:25212522

  18. The Virtual Peg Insertion Test as an assessment of upper limb coordination in ARSACS patients: a pilot study.

    PubMed

    Gagnon, Cynthia; Lavoie, Caroline; Lessard, Isabelle; Mathieu, Jean; Brais, Bernard; Bouchard, Jean-Pierre; Fluet, Marie-Christine; Gassert, Roger; Lambercy, Olivier

    2014-12-15

    This paper introduces a novel assessment tool to provide clinicians with quantitative and more objective measures of upper limb coordination in patients suffering from Autosomal Recessive Spastic Ataxia of Charlevoix-Saguenay (ARSACS). The Virtual Peg Insertion Test (VPIT) involves manipulating an instrumented handle in order to move nine pegs into nine holes displayed in a virtual environment. The main outcome measures were the number of zero-crossings of the hand acceleration vector, as a measure of movement coordination, and the total time required to complete the insertion of the nine pegs, as a measure of overall upper limb performance. 8 of 9 patients with ARSACS were able to complete five repetitions with the VPIT. Patients were found to be significantly less coordinated and slower than age-matched healthy subjects (p<0.01). Performance of ARSACS patients was positively correlated with the Nine-Hole Peg Test (r=0.85, p<0.01) and with age (r=0.93, p<0.01), indicative of the degenerative nature of the disease. This study presents preliminary results on the use of a robotics and virtual reality assessment tool with ARSACS patients. Results highlight its potential to assess impaired coordination and monitor its progression over time. Copyright © 2014 Elsevier B.V. All rights reserved.
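    The coordination measure above, zero-crossings of the hand acceleration, can be counted directly from a sampled trace; fewer crossings per peg insertion mean smoother movement. A sketch assuming a 1-D acceleration component (counts for the three axes of the vector would be combined):

```python
import numpy as np

def zero_crossings(accel):
    """Count sign changes in an acceleration trace, ignoring exact
    zeros; fewer crossings indicate smoother, better-coordinated
    movement."""
    a = np.asarray(accel, dtype=float)
    signs = np.sign(a[a != 0.0])          # drop zeros, keep the sign sequence
    return int(np.count_nonzero(np.diff(signs)))
```

    For example, the trace [0.5, 1.2, -0.3, -0.8, 0.1] contains two sign changes.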

  19. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system

    PubMed Central

    Aronov, Dmitriy; Tank, David W.

    2015-01-01

    Virtual reality (VR) enables precise control of an animal’s environment and otherwise impossible experimental manipulations. Neural activity in navigating rodents has been studied on virtual linear tracks. However, the spatial navigation system’s engagement in complete two-dimensional environments has not been shown. We describe a VR setup for rats, including control software and a large-scale electrophysiology system, which supports 2D navigation by allowing animals to rotate and walk in any direction. The entorhinal-hippocampal circuit, including place cells, grid cells, head direction cells and border cells, showed 2D activity patterns in VR similar to those in the real world. Hippocampal neurons exhibited various remapping responses to changes in the appearance or the shape of the virtual environment, including a novel form in which a VR-induced cue conflict caused remapping to lock to geometry rather than salient cues. These results suggest a general-purpose tool for novel types of experimental manipulations in navigating rats. PMID:25374363

  20. Virtual Glovebox (VGX) Aids Astronauts in Pre-Flight Training

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Virtual Glovebox (VGX) was developed to allow astronauts on Earth to train for complex biology research tasks in space. The astronauts may reach into the virtual environment, naturally manipulating specimens, tools, equipment, and accessories in a simulated microgravity environment as they would do in space. Such virtual reality technology also provides engineers and space operations staff with rapid prototyping, planning, and human performance modeling capabilities. Other Earth based applications being explored for this technology include biomedical procedural training and training for disarming bio-terrorism weapons.

  1. Functional Analysis in Virtual Environments

    ERIC Educational Resources Information Center

    Vasquez, Eleazar, III; Marino, Matthew T.; Donehower, Claire; Koch, Aaron

    2017-01-01

    Functional analysis (FA) is an assessment procedure involving the systematic manipulation of an individual's environment to determine why a target behavior is occurring. An analog FA provides practitioners the opportunity to manipulate variables in a controlled environment and formulate a hypothesis for the function of a behavior. In previous…

  2. Multisensory Integration in the Virtual Hand Illusion with Active Movement

    PubMed Central

    Satoh, Satoru; Hachimura, Kozaburo

    2016-01-01

    Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822

  3. The effect of virtual reality on gait variability.

    PubMed

    Katsavelis, Dimitrios; Mukherjee, Mukul; Decker, Leslie; Stergiou, Nicholas

    2010-07-01

    Optic Flow (OF) plays an important role in human locomotion and manipulation of OF characteristics can cause changes in locomotion patterns. The purpose of the study was to investigate the effect of the velocity of optic flow on the amount and structure of gait variability. Each subject underwent four conditions of treadmill walking at their self-selected pace. In three conditions the subjects walked in an endless virtual corridor, while a fourth control condition was also included. The three virtual conditions differed in the speed of the optic flow displayed as follows--same speed (OFn), faster (OFf), and slower (OFs) than that of the treadmill. Gait kinematics were tracked with an optical motion capture system. Gait variability measures of the hip, knee and ankle range of motion and stride interval were analyzed. Amount of variability was evaluated with linear measures of variability--coefficient of variation, while structure of variability i.e., its organization over time, was measured with nonlinear measures--approximate entropy and detrended fluctuation analysis. The linear measures of variability, CV, did not show significant differences between Non-VR and VR conditions, while nonlinear measures of variability identified significant differences at the hip, ankle, and in stride interval. In response to manipulation of the optic flow, significant differences were observed between the three virtual conditions in the following order: OFn > OFf > OFs. Measures of structure of variability are more sensitive to changes in gait due to manipulation of visual cues, whereas measures of the amount of variability may be concealed by adaptive mechanisms. Visual cues increase the complexity of gait variability and may increase the degrees of freedom available to the subject. Further exploration of the effects of optic flow manipulation on locomotion may provide us with an effective tool for rehabilitation of subjects with sensorimotor issues.
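    The two families of measures contrasted above can be made concrete: the coefficient of variation quantifies the amount of variability, while approximate entropy quantifies its temporal structure (regularity). A minimal sketch; the parameter choices (m = 2, r = 0.2 SD) are common conventions, not necessarily those of the study.

```python
import numpy as np

def coefficient_of_variation(x):
    """Linear 'amount of variability': SD as a percentage of the mean."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std() / x.mean()

def approximate_entropy(x, m=2, r=0.2):
    """Nonlinear 'structure of variability': approximate entropy of a
    series such as stride intervals. Lower values mean more regular,
    more predictable fluctuations; r is a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # fraction of templates within tolerance of each template (self included)
        counts = [
            np.mean(np.max(np.abs(templates - t), axis=1) <= tol)
            for t in templates
        ]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)
```

    A strictly alternating stride series yields an approximate entropy near zero, while an uncorrelated random series yields a clearly larger value, which is the regular-versus-complex distinction the nonlinear measures exploit.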

  4. Studying social interactions through immersive virtual environment technology: virtues, pitfalls, and future challenges

    PubMed Central

    Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel

    2015-01-01

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET, researchers have full control over the interaction partners and can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Besides the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing), and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414

  6. Virtual reality simulators: valuable surgical skills trainers or video games?

    PubMed

    Willis, Ross E; Gomez, Pedro Pablo; Ivatury, Srinivas J; Mitra, Hari S; Van Sickle, Kent R

    2014-01-01

    Virtual reality (VR) and physical model (PM) simulators differ in terms of whether the trainee is manipulating actual 3-dimensional objects (PM) or computer-generated 3-dimensional objects (VR). Much like video games (VG), VR simulators utilize computer-generated graphics. These differences may have profound effects on the utility of VR and PM training platforms. In this study, we aimed to determine whether a relationship exists between VR, PM, and VG platforms. VR and PM simulators for laparoscopic camera navigation ([LCN], experiment 1) and flexible endoscopy ([FE] experiment 2) were used in this study. In experiment 1, 20 laparoscopic novices played VG and performed 0° and 30° LCN exercises on VR and PM simulators. In experiment 2, 20 FE novices played VG and performed colonoscopy exercises on VR and PM simulators. In both experiments, VG performance was correlated with VR performance but not with PM performance. Performance on VR simulators did not correlate with performance on respective PM models. VR environments may be more like VG than previously thought. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.

  7. The Impact of Using Synchronous Collaborative Virtual Tangram in Children's Geometric

    ERIC Educational Resources Information Center

    Lin, Chiu-Pin; Shao, Yin-juan; Wong, Lung-Hsiang; Li, Yin-Jen; Niramitranon, Jitti

    2011-01-01

    This study aimed to develop a collaborative and manipulative virtual Tangram puzzle to facilitate children to learn geometry in the computer-supported collaborative learning environment with Tablet PCs. In promoting peer interactions and stimulating students' higher-order thinking and creativity toward geometric problem-solving, we designed a…

  8. Technology's Impact on Fraction Learning: An Experimental Comparison of Virtual and Physical Manipulatives

    ERIC Educational Resources Information Center

    Mendiburo, Maria; Hasselbring, Ted

    2011-01-01

    Fractions are among the most difficult mathematical concepts for elementary school students to master (Behr, Harel, Post, & Lesh, 1992; Bezuk & Cramer, 1989; Moss & Case, 1999). Research indicates that manipulatives (e.g. fractions circles, fractions strips) positively impact students' conceptual and procedural understanding of…

  9. Surgical planning for microsurgical excision of cerebral arterio-venous malformations using virtual reality technology.

    PubMed

    Ng, Ivan; Hwang, Peter Y K; Kumar, Dinesh; Lee, Cheng Kiang; Kockro, Ralf A; Sitoh, Y Y

    2009-05-01

    To evaluate the feasibility of surgical planning using a virtual reality platform workstation in the treatment of cerebral arterio-venous malformations (AVMs), patient-specific data from multiple imaging modalities were co-registered, fused and displayed as a 3D stereoscopic object on the Dextroscope, a virtual reality surgical planning platform. This system allows for manipulation of 3D data and for the user to evaluate and appreciate the angio-architecture of the nidus with regard to the position and spatial relationships of critical feeders and draining veins. We evaluated the ability of the Dextroscope to influence surgical planning by providing a better understanding of the angio-architecture, as well as its impact on the surgeon's pre- and intra-operative confidence and ability to tackle these lesions. Twenty-four patients were studied; the mean age was 29.65 years. Following pre-surgical planning on the Dextroscope, 23 patients underwent microsurgical resection, during which all had documented complete resection of the AVM. Planning on the virtual reality platform allowed for identification of critical feeders and draining vessels in all patients. The appreciation of the complex patient-specific angio-architecture when establishing a surgical plan was found to be invaluable in the conduct of the procedure and to enhance the surgeon's confidence significantly. Surgical planning of AVM resection with a virtual reality system allowed detailed and comprehensive analysis of 3D multi-modality imaging data and, in our experience, proved very helpful in establishing a good surgical strategy, enhancing intra-operative spatial orientation and increasing the surgeon's confidence.

  10. The sense of body ownership relaxes temporal constraints for multisensory integration.

    PubMed

    Maselli, Antonella; Kilteni, Konstantina; López-Moliner, Joan; Slater, Mel

    2016-08-03

    Experimental work on body ownership illusions showed how simple multisensory manipulation can generate the illusory experience of an artificial limb as being part of the own-body. This work highlighted how own-body perception relies on a plastic brain representation emerging from multisensory integration. The flexibility of this representation is reflected in the short-term modulations of physiological states and perceptual processing observed during these illusions. Here, we explore the impact of ownership illusions on the temporal dimension of multisensory integration. We show that, during the illusion, the temporal window for integrating touch on the physical body with touch seen on a virtual body representation, increases with respect to integration with visual events seen close but separated from the virtual body. We show that this effect is mediated by the ownership illusion. Crucially, the temporal window for visuotactile integration was positively correlated with participants' scores rating the illusory experience of owning the virtual body and touching the object seen in contact with it. Our results corroborate the recently proposed causal inference mechanism for illusory body ownership. As a novelty, they show that the ensuing illusory causal binding between stimuli from the real and fake body relaxes constraints for the integration of bodily signals.

  11. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  12. Height, social comparison, and paranoia: An immersive virtual reality experimental study

    PubMed Central

    Freeman, Daniel; Evans, Nicole; Lister, Rachel; Antley, Angus; Dunn, Graham; Slater, Mel

    2014-01-01

    Mistrust of others may build upon perceptions of the self as vulnerable, consistent with an association of paranoia with perceived lower social rank. Height is a marker of social status and authority. Therefore we tested the effect of manipulating height, as a proxy for social rank, on paranoia. Height was manipulated within an immersive virtual reality simulation. Sixty females who reported paranoia experienced a virtual reality train ride twice: at their normal and reduced height. Paranoia and social comparison were assessed. Reducing a person's height resulted in more negative views of the self in comparison with other people and increased levels of paranoia. The increase in paranoia was fully mediated by changes in social comparison. The study provides the first demonstration that reducing height in a social situation increases the occurrence of paranoia. The findings indicate that negative social comparison is a cause of mistrust. PMID:24924485

  13. Positioning the endoscope in laparoscopic surgery by foot: Influential factors on surgeons' performance in virtual trainer.

    PubMed

    Abdi, Elahe; Bouri, Mohamed; Burdet, Etienne; Himidan, Sharifa; Bleuler, Hannes

    2017-07-01

    We have investigated how surgeons can use the foot to position a laparoscopic endoscope, a task that normally requires an extra assistant. Surgeons need to train in order to exploit the possibilities offered by this new technique and to safely manipulate the endoscope together with their hand movements. A realistic abdominal cavity was developed as a training simulator to investigate this multi-arm manipulation. In this virtual environment, the surgeon's biological hands are modelled as laparoscopic graspers while the viewpoint is controlled by the dominant foot. Twenty-three surgeons and medical students performed single-handed and bimanual manipulation in this environment. The results show that residents had superior performance compared to both medical students and more experienced surgeons, suggesting that residency is an ideal period for this training. Performing the single-handed task improved performance in the bimanual task, whereas the converse was not true.

  14. Effective Student Learning of Fractions with an Interactive Simulation

    ERIC Educational Resources Information Center

    Hensberry, Karina K. R.; Moore, Emily B.; Perkins, Katherine K.

    2015-01-01

    Computer technology, when coupled with reform-based teaching practices, has been shown to be an effective way to support student learning of mathematics. The quality of the technology itself, as well as how it is used, impacts how much students learn. Interactive simulations are dynamic virtual environments similar to virtual manipulatives that…

  15. Virtual Reality: An Experiential Tool for Clinical Psychology

    ERIC Educational Resources Information Center

    Riva, Giuseppe

    2009-01-01

    Several Virtual Reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 15 years. Typically, in VR the patient learns to manipulate problematic situations related to his/her problem. In fact, VR can be described as an advanced form of human-computer interface that is able…

  16. Analysis of Peer Learning Behaviors Using Multiple Representations in Virtual Reality and Their Impacts on Geometry Problem Solving

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Hu, Shih-Shin

    2013-01-01

    Learning geometry emphasizes the importance of exploring different representations such as virtual manipulatives, written math formulas, and verbal explanations, which help students build math concepts and develop critical thinking. Besides helping individuals construct math knowledge, peer interaction also plays a crucial role in promoting an…

  17. Embodiment: A New Perspective for Evaluating Physicality in Learning

    ERIC Educational Resources Information Center

    Han, Insook

    2013-01-01

    The purpose of this study is to provide a new perspective for evaluating physicality in learning with a preliminary experimental study based on embodied cognition. While there are studies showing no superiority of physical manipulation over virtual manipulation, there are also studies that seem to advocate adding more physicality in simulations…

  18. The Effects of Virtual Versus Physical Lab Manipulatives on Inquiry Skill Acquisition and Conceptual Understanding of Density

    NASA Astrophysics Data System (ADS)

    Brinson, James R.

    The current study compared the effects of virtual versus physical laboratory manipulatives on 84 undergraduate non-science majors' (a) conceptual understanding of density and (b) density-related inquiry skill acquisition. A pre-post comparison study design was used, which incorporated all components of an inquiry-guided classroom, except experimental mode, and which controlled for curriculum, instructor, instructional method, time spent on task, and availability of reference resources. Participants were randomly assigned to either a physical or virtual lab group. Pre- and post-assessments of conceptual understanding and inquiry skills were administered to both groups. Paired-samples t tests revealed a significant mean percent correct score increase for conceptual understanding in both the physical lab group (M = .103, SD = .168), t(38) = -3.82, p < .001, r = .53, two-tailed, and the virtual lab group (M = .084, SD = .177), t(44) = -3.20, p = .003, r = .43, two-tailed. However, a one-way ANCOVA (using pretest scores as the covariate) revealed that the main effect of lab group on conceptual learning gains was not significant, F(1, 81) = 0.081, p = .776, two-tailed. An omnibus test of model coefficients within hierarchical logistic regression revealed that a correct response on inquiry pretest scores was not a significant predictor of a correct post-test response, χ²(1, N = 84) = 1.68, p = .195, and that when lab mode was added to the model, it did not significantly increase the model's predictive ability, χ²(2, N = 84) = 1.95, p = .377. Thus, the data in the current study revealed no significant difference in the effect of physical versus virtual manipulatives when used to teach conceptual understanding and inquiry skills related to density.
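
    The effect sizes reported above follow directly from the t statistics and degrees of freedom via the standard conversion r = √(t² / (t² + df)); a quick check in Python reproduces both values:

```python
import math

def effect_size_r(t, df):
    """Correlation effect size r recovered from a t statistic."""
    return math.sqrt(t**2 / (t**2 + df))

# Values reported in the abstract: t(38) = -3.82 (physical) and t(44) = -3.20 (virtual)
print(round(effect_size_r(-3.82, 38), 2))  # 0.53
print(round(effect_size_r(-3.20, 44), 2))  # 0.43
```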

  19. Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces

    NASA Astrophysics Data System (ADS)

    Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana

    Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual reality system capable of immersing trainees in a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user’s “sense of touch” within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system is encouraging.

  20. Development of a novel virtual reality gait intervention.

    PubMed

    Boone, Anna E; Foreman, Matthew H; Engsberg, Jack R

    2017-02-01

    Improving gait speed and kinematics can be a time-consuming and tiresome process. We hypothesized that incorporating virtual reality videogame play into variable improvement goals would improve levels of enjoyment and motivation and lead to improved gait performance. The aim was to develop a feasible, engaging VR gait intervention for improving gait variables. Completing this investigation involved four steps: 1) identify gait variables that could be manipulated to improve gait speed and kinematics using the Microsoft Kinect and free software; 2) identify free internet videogames that could successfully manipulate the chosen gait variables; 3) experimentally evaluate the ability of the videogames and software to manipulate the gait variables; and 4) evaluate the enjoyment and motivation of a small sample of persons without disability. The Kinect sensor was able to detect stride length, cadence, and joint angles. FAAST software was able to identify predetermined gait variable thresholds and use the thresholds to play free online videogames. Videogames that involved continuous pressing of a keyboard key were found to be most appropriate for manipulating the gait variables. Five participants without disability evaluated the effectiveness of the system for modifying the gait variables, along with enjoyment and motivation during play. Participants were able to modify gait variables to permit successful videogame play. Motivation and enjoyment were high. A clinically feasible and engaging virtual reality intervention for improving gait speed and kinematics has been developed and initially tested. It may provide an engaging avenue for achieving the thousands of repetitions necessary for neural plastic changes and improved gait. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review.

    PubMed

    Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E

    2017-06-01

    Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into them to be manipulated empirically, but the impact of such manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (PubMed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue". Among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful to assess large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.

  2. Switching in Feedforward Control of Grip Force During Tool-Mediated Interaction With Elastic Force Fields

    PubMed Central

    White, Olivier; Karniel, Amir; Papaxanthis, Charalambos; Barbiero, Marie; Nisky, Ilana

    2018-01-01

    Switched systems are common in artificial control systems. Here, we suggest that the brain adopts a switched feedforward control of grip forces during manipulation of objects. We measured how participants modulated grip force when interacting with soft and rigid virtual objects when stiffness varied continuously between trials. We identified a sudden phase transition between two forms of feedforward control that differed in the timing of the synchronization between the anticipated load force and the applied grip force. The switch occurred several trials after a threshold stiffness level in the range 100–200 N/m. These results suggest that in the control of grip force, the brain acts as a switching control system. This opens new research questions as to the nature of the discrete state variables that drive the switching. PMID:29930504
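
    The switching idea can be illustrated with a toy feedforward controller. All parameter values here are hypothetical; the threshold is simply placed inside the 100-200 N/m transition range reported above, and the two regimes differ only in the timing offset between anticipated load and commanded grip:

```python
STIFFNESS_THRESHOLD = 150.0  # N/m; hypothetical value within the reported range

def grip_force_command(load_force, stiffness, margin=1.0, gain=0.8, lead=0.05):
    """Toy switched feedforward controller for grip force.

    Returns the commanded grip force (N) and the lead time (s) by which
    grip modulation anticipates the load force. Below the stiffness
    threshold (the "soft" regime) grip leads the load by `lead` seconds;
    above it (the "rigid" regime) grip and load are synchronized,
    mimicking a discrete switch between two feedforward strategies.
    """
    lead_time = lead if stiffness < STIFFNESS_THRESHOLD else 0.0
    return margin + gain * load_force, lead_time
```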

  3. Stereotyped behavior of severely disabled children in classroom and free-play settings.

    PubMed

    Thompson, T J; Berkson, G

    1985-05-01

    The relationships between stereotyped behavior, object manipulation, self-manipulation, teacher attention, and various developmental measures were examined in 101 severely developmentally disabled children in their classrooms and a free-play setting. Stereotyped behavior without objects was positively correlated with self-manipulation and CA and was negatively correlated with complex object manipulation, developmental age, developmental quotient, and teacher attention. Stereotyped behavior with objects was negatively correlated with complex object manipulation. Partial correlations showed that age, self-manipulation, and developmental age shared unique variance with stereotyped behavior without objects.

  4. Manually locating physical and virtual reality objects.

    PubMed

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object location. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Reaching for and manually handling virtual objects in a CAVE for simulation, training, and prototyping is more accurate than predicted when locating farther objects.
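
    A minimal 2D sketch of the projection geometry such a model rests on (a generic reconstruction, not the authors' exact formulation): a point rendered at a given depth produces an on-screen disparity between the two eyes' projections, and the depth at which a viewer fuses that disparity depends on the actual interpupillary distance (IPD) and eye position:

```python
def screen_disparity(depth, screen_dist, ipd):
    """On-screen separation of the two eyes' projections of a point
    rendered at `depth` (all distances in metres, eyes at the origin)."""
    return ipd * (1.0 - screen_dist / depth)

def perceived_depth(disparity, screen_dist, ipd):
    """Depth at which the two projected images fuse (inverse of the above)."""
    return screen_dist * ipd / (ipd - disparity)

# Images rendered for a modeled IPD of 64 mm, screen 2 m away, target 3 m away
s = screen_disparity(3.0, 2.0, 0.064)
# A viewer with a 60 mm IPD fuses the same images at a different depth,
# giving a systematic depth error of roughly 10 cm for this farther target
print(round(perceived_depth(s, 2.0, 0.060), 2))  # 3.1
```

    When the viewer's parameters match the rendering assumptions the round trip is exact; a mismatch grows with target distance, consistent with the larger predicted errors for farther targets.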

  5. Stereoscopic, Force-Feedback Trainer For Telerobot Operators

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1994-01-01

    Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.

  6. Closed-form dynamics of a hexarot parallel manipulator by means of the principle of virtual work

    NASA Astrophysics Data System (ADS)

    Pedrammehr, Siamak; Nahavandi, Saeid; Abdi, Hamid

    2018-04-01

    In this research, a systematic approach to solving the inverse dynamics of hexarot manipulators is addressed using the methodology of virtual work. For the first time, a closed form of the mathematical formulation of the standard dynamic model is presented for this class of mechanisms. An efficient algorithm for solving this closed-form dynamic model of the mechanism is developed, and it is used to simulate the dynamics of the system for different trajectories. Validation of the proposed model is performed using SimMechanics, and it is shown that the results of the proposed mathematical model match those obtained with the SimMechanics model.
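
    In generic terms (a textbook sketch, not the paper's actual closed-form expressions), the principle of virtual work for a parallel manipulator states that the actuator torques τ, the platform wrench, and the limb wrenches do no net work under any virtual displacement of the actuated joints:

```latex
\delta q_a^{\top}\,\tau + \delta x_p^{\top} F_p + \sum_i \delta x_i^{\top} F_i = 0,
\qquad
\delta x_p = J_p\,\delta q_a, \quad \delta x_i = J_i\,\delta q_a,
```

    so the inverse dynamics reduce to τ = -(J_pᵀ F_p + Σᵢ Jᵢᵀ Fᵢ), where F_p and F_i collect the inertial and gravity wrenches of the platform and of limb i, and J_p, J_i are the Jacobians mapping actuated joint rates to the platform and limb twists.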

  7. Young Children's Learning Performance and Efficiency When Using Virtual Manipulative Mathematics iPad Apps

    ERIC Educational Resources Information Center

    Moyer-Packenham, Patricia S.; Shumway, Jessica F.; Bullock, Emma; Tucker, Stephen I.; Anderson-Pence, Katie L.; Westenskow, Arla; Boyer-Thurgood, Jennifer; Maahs-Fladung, Cathy; Symanzik, Juergen; Mahamane, Salif; MacDonald, Beth; Jordan, Kerry

    2015-01-01

    Part of a larger initiation mixed methods study (Greene, Caracelli, & Graham, 1989), this paper discusses the changes in young children's learning performance and efficiency (one element of the quantitative portion of the larger study) during clinical interviews in which each child interacted with a variety of virtual manipulative…

  8. Model Manipulation and Learning: Fostering Representational Competence with Virtual and Concrete Models

    ERIC Educational Resources Information Center

    Stull, Andrew T.; Hegarty, Mary

    2016-01-01

    This study investigated the development of representational competence among organic chemistry students by using 3D (concrete and virtual) models as aids for teaching students to translate between multiple 2D diagrams. In 2 experiments, students translated between different diagrams of molecules and received verbal feedback in 1 of the following 3…

  9. A Head in Virtual Reality: Development of A Dynamic Head and Neck Model

    ERIC Educational Resources Information Center

    Nguyen, Ngan; Wilson, Timothy D.

    2009-01-01

    Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…

  10. Virtual quantum subsystems.

    PubMed

    Zanardi, P

    2001-08-13

    The physical resources available to access and manipulate the degrees of freedom of a quantum system define the set A of operationally relevant observables. The algebraic structure of A selects a preferred tensor product structure, i.e., a partition into subsystems. The notion of compoundness for quantum systems is accordingly relativized. Universal control over virtual subsystems can be achieved by using quantum noncommutative holonomies.

  11. The Case for Adopting Virtual Manipulatives in Mathematics Education for Students with Disabilities

    ERIC Educational Resources Information Center

    Satsangi, Rajiv; Miller, Bridget

    2017-01-01

    The past four decades have generated significant research toward improving the academic outcomes of students with disabilities, especially in the field of mathematics. In this effort, the role of technology in the classroom, both high- and low-tech, has garnered significant attention. For students with disabilities, the use of manipulatives is a…

  12. Effects of Worked Examples Using Manipulatives on Fifth Graders' Learning Performance and Attitude toward Mathematics

    ERIC Educational Resources Information Center

    Lee, Chun-Yi; Chen, Ming-Jang

    2015-01-01

    The purpose of this study was to investigate the influence of worked examples using virtual manipulatives on the learning performance and attitudes of fifth grade students toward mathematics. The results showed that: (1) the utilization of non-routine examples could promote learning performance of equivalent fractions. (2) Learning with virtual…

  13. The Development of the Virtual Learning Media of the Sacred Object Artwork

    ERIC Educational Resources Information Center

    Nuanmeesri, Sumitra; Jamornmongkolpilai, Saran

    2018-01-01

    This research aimed to develop the virtual learning media of the sacred object artwork by applying the concept of the virtual technology in order to publicize knowledge on the cultural wisdom of the sacred object artwork. It was done by designing and developing the virtual learning media of the sacred object artwork for the virtual presentation.…

  14. Space Science

    NASA Image and Video Library

    2003-06-01

NASA’s Virtual Glovebox (VGX) was developed to allow astronauts on Earth to train for complex biology research tasks in space. The astronauts may reach into the virtual environment, naturally manipulating specimens, tools, equipment, and accessories in a simulated microgravity environment as they would do in space. Such virtual reality technology also provides engineers and space operations staff with rapid prototyping, planning, and human performance modeling capabilities. Other Earth-based applications being explored for this technology include biomedical procedural training and training for disarming bio-terrorism weapons.

  15. Innovative approaches to the rehabilitation of upper extremity hemiparesis using virtual environments

    PubMed Central

    MERIANS, A. S.; TUNIK, E.; FLUET, G. G.; QIU, Q.; ADAMOVICH, S. V.

    2017-01-01

Aim Upper-extremity interventions for hemiparesis are a challenging aspect of stroke rehabilitation. The purpose of this paper is to report the feasibility of using virtual environments (VEs) in combination with robotics to assist recovery of hand-arm function and to present preliminary data demonstrating the potential of using sensory manipulations in VE to drive activation in targeted neural regions. Methods We trained 8 subjects for eight three-hour sessions using a library of complex VEs integrated with robots, comparing training the arm and hand separately to training the arm and hand together. Instrumented gloves and a hand exoskeleton were used for hand tracking and haptic effects. A HapticMaster robotic arm was used for arm tracking and generating three-dimensional haptic VEs. To investigate the use of manipulations in VE to drive neural activations, we created a “virtual mirror” that subjects used while performing a unimanual task. Cortical activation was measured with functional MRI (fMRI) and transcranial magnetic stimulation. Results Both groups showed improvement in kinematics and measures of real-world function. The group trained using their arm and hand together showed greater improvement. In a stroke subject, fMRI data suggested virtual mirror feedback could activate the sensorimotor cortex contralateral to the reflected hand (ipsilateral to the moving hand), thus recruiting the lesioned hemisphere. Conclusion Gaming simulations interfaced with robotic devices provide a training medium that can modify movement patterns. In addition to showing that our VE therapies can optimize behavioral performance, we show preliminary evidence to support the potential of using specific sensory manipulations to selectively recruit targeted neural circuits. PMID:19158659

  16. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  17. The effects of virtual experience on attitudes toward real brands.

    PubMed

    Dobrowolski, Pawel; Pochwatko, Grzegorz; Skorko, Maciek; Bielecki, Maksymilian

    2014-02-01

    Although the commercial availability and implementation of virtual reality interfaces has seen rapid growth in recent years, little research has been conducted on the potential for virtual reality to affect consumer behavior. One unaddressed issue is how our real world attitudes are affected when we have a virtual experience with the target of those attitudes. This study compared participant (N=60) attitudes toward car brands before and after a virtual test drive of those cars was provided. Results indicated that attitudes toward test brands changed after experience with virtual representations of those brands. Furthermore, manipulation of the quality of this experience (in this case modification of driving difficulty) was reflected in the direction of attitude change. We discuss these results in the context of the associative-propositional evaluation model.

  18. Investigating Preservice Teachers' Understanding of Balance Concepts Utilizing a Clinical Interview Method and a Virtual Tool

    ERIC Educational Resources Information Center

    Wilhelm, Jennifer; Matteson, Shirley; She, Xiaobo

    2013-01-01

    Our study was enacted in university mathematics education classes in the USA with preservice teachers (PSTs). This research focused on PSTs' interview responses that were used to assess their understanding of balance when challenged with tasks involving virtual manipulatives. Siegler's rules were used in analyzing PSTs' responses to…

  19. 3D virtual character reconstruction from projections: a NURBS-based approach

    NASA Astrophysics Data System (ADS)

    Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.

    2004-05-01

This work has been carried out within the framework of the TOON industrial project, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology addresses the issues of 2D/3D reconstruction from a limited number of drawn projections and of 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled and solved. Note that user interaction is enabled exclusively in 2D through a multiview constraint specification method. This is fully consistent with the cartoon creator's traditional practice and makes it possible to avoid the use of 3D modeling software packages, which are generally complex to manipulate.

  20. Effects of Axial Torsion on Disc Height Distribution: an In Vivo Study

    PubMed Central

    Espinoza Orías, Alejandro A.; Mammoser, Nicole M.; Triano, John J.; An, Howard S.; Andersson, Gunnar B.J.; Inoue, Nozomu

    2016-01-01

Objectives Axial rotation of the torso is commonly used during manipulation treatment of low back pain. Little is known about the effect of these positions on disc morphology. Rotation is a three-dimensional event that is inadequately represented with planar images in the clinic. True quantification of the intervertebral gap can be achieved with a disc height distribution. The objective of this study was to analyze disc height distribution patterns during torsion relevant to manipulation in vivo. Methods Eighty-one volunteers were CT-scanned both in supine and in right 50° rotation positions. Virtual models of each intervertebral gap representing the disc were created with the inferior endplate of each ‘disc’ set as the reference surface and separated into five anatomical zones: four peripheral and one central, corresponding to the footprint of the annulus fibrosus and nucleus pulposus, respectively. Whole-disc and individual anatomical zone disc height distributions were calculated in both positions and compared against each other with ANOVA, with significance set at p < 0.05. Results Mean neutral disc height was 7.32 (1.59) mm. With 50° rotation, a small but significant increase to 7.44 (1.52) mm (p < 0.0002) was observed. The right side showed larger separation at most levels, except at L5/S1. The posterior and right zones increased in height upon axial rotation of the spine (p < 0.0001), while the left, anterior and central zones decreased. Conclusions This study quantified important tensile/compressive changes in disc height during torsion. The implications of these mutually opposing changes for spinal manipulation are still unknown. PMID:27059249

  1. A new electrowetting lab-on-a-chip platform based on programmable and virtual wall-less channels

    NASA Astrophysics Data System (ADS)

    Banerjee, Ananda; Kreit, Eric; Dhindsa, Manjeet; Heikenfeld, Jason; Papautsky, Ian

    2011-02-01

Microscale liquid handling based on electrowetting has been previously demonstrated by several groups. Such liquid manipulation, however, is limited to control of individual droplets, aptly termed digital microfluidics. The inability to form continuous channels thus prevents conventional microfluidic sample manipulation and analysis approaches, such as electroosmosis and electrophoresis. In this paper, we discuss our recent progress on the development of electrowetting-based virtual channels. These channels can be created and reconfigured on demand and preserve their shape without external stimulus. We also discuss recent progress towards demonstrating electroosmotic flows in such microchannels for fluid transport. This would permit a variety of basic functionalities in this new platform, including sample transport and mixing between various functional areas of the chip.

  2. Sex Differences in Object Manipulation in Wild Immature Chimpanzees (Pan troglodytes schweinfurthii) and Bonobos (Pan paniscus): Preparation for Tool Use?

    PubMed

    Koops, Kathelijne; Furuichi, Takeshi; Hashimoto, Chie; van Schaik, Carel P

    2015-01-01

Sex differences in immatures predict behavioural differences in adulthood in many mammal species. Because most studies have focused on sex differences in social interactions, little is known about possible sex differences in 'preparation' for adult life with regards to tool use skills. We investigated sex and age differences in object manipulation in immature apes. Chimpanzees use a variety of tools across numerous contexts, whereas bonobos use few tools and none in foraging. In both species, a female bias in adult tool use has been reported. We studied object manipulation in immature chimpanzees at Kalinzu (Uganda) and bonobos at Wamba (Democratic Republic of Congo). We tested predictions of the 'preparation for tool use' hypothesis. We confirmed that chimpanzees showed higher rates and more diverse types of object manipulation than bonobos. Against expectation, male chimpanzees showed higher object manipulation rates than females, whereas in bonobos no sex difference was found. However, object manipulation by male chimpanzees was play-dominated, whereas manipulation types of female chimpanzees were more diverse (e.g., bite, break, carry). Manipulation by young immatures of both species was similarly dominated by play, but only in chimpanzees did it become more diverse with age. Moreover, in chimpanzees, object types became more tool-like (i.e., sticks) with age, further suggesting preparation for tool use in adulthood. The male bias in object manipulation in immature chimpanzees, along with the late onset of tool-like object manipulation, indicates that not all (early) object manipulation (i.e., object play) in immatures prepares for subsistence tool use. Instead, given the similarity with gender differences in human children, object play may also function in motor skill practice for male-specific behaviours (e.g., dominance displays). In conclusion, even though immature behaviours almost certainly reflect preparation for adult roles, more detailed future work is needed to disentangle possible functions of object manipulation during development.

  4. Captive Bottlenose Dolphins (Tursiops truncatus) Spontaneously Using Water Flow to Manipulate Objects

    PubMed Central

    Yamamoto, Chisato; Furuta, Keisuke; Taki, Michihiro; Morisaka, Tadamichi

    2014-01-01

Several terrestrial animals and delphinids manipulate objects in a tactile manner, using parts of their bodies such as their mouths or hands. In this paper, we report that bottlenose dolphins (Tursiops truncatus) manipulate objects not by direct bodily contact, but by spontaneously generated water flow. Three of four dolphins at Suma Aqualife Park performed object manipulation with food. The typical sequence of object manipulation consisted of a three-step procedure. First, the dolphins released the object from the sides of their mouths while assuming a head-down posture near the floor. They then manipulated the object around their mouths and caught it. Finally, they ceased their head-down posture and started to swim. When the dolphins moved the object, they used the water current in the pool or moved their heads. These results show that dolphins manipulate objects using movements that do not directly involve contact between a body part and the object. When the dolphins dropped the object on the floor, they lifted it by generating water flow in one of three ways: opening and closing their mouths repeatedly, moving their heads lengthwise, or making circular head motions. This result suggests that bottlenose dolphins spontaneously change their environment to manipulate objects. Aquatic animals such as dolphins may manipulate objects by changing their environment, while terrestrial animals do not, because the aquatic environment is far more viscous than the terrestrial one. This is the first report thus far of any non-human mammal engaging in object manipulation using several methods to change its environment. PMID:25250625

  5. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

Sensor fusion extends a robot's ability to perform complex tasks. One interesting application is the pushing operation, in which the robot, using multiple sensors, moves an object by pushing it. A pushing operation generally consists of approaching, touching, and pushing(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the consequences of the robot body or tool-tip striking the object are neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. Compliance control is a control scheme that compensates the commanded trajectory so that external forces are accommodated. Here, the first compliance controller is driven by the external force estimated with a reaction torque observer(2) and compensates for contact sensation; the second compensates for non-contact sensation. Contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. A non-contact sensation is therefore introduced before contact, realized in this paper with a visual sensor: instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. With both contact and non-contact sensation, the robot is compliant over a wider range of situations. The paper considers a heavy mobile manipulator and a heavy object, which carry significant momentum at the touching stage. A chopstick attached to the object side demonstrates the effectiveness of the proposed method: both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Experimental results show the validity of the proposed method.
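The dual-compliance idea can be illustrated with a toy one-dimensional admittance loop. This is only a sketch of the general concept, not the authors' controller: the gains, the linear depth-to-force mapping, and all function names below are invented for illustration.

```python
# Sketch of a dual-compliance (admittance-style) pushing controller in 1D.
# Assumptions: f_contact would come from a force sensor or reaction torque
# observer, while depth (distance to the object, from vision) is mapped to
# a repulsive "virtual force" before contact. Gains are illustrative only.

def virtual_force(depth, d0=0.10, k_v=50.0):
    """Repulsive virtual force that grows as the tool approaches the object."""
    return k_v * (d0 - depth) if depth < d0 else 0.0

def compliance_step(x_ref, x_cmd, v_cmd, f_contact, depth,
                    m=5.0, b=40.0, k=200.0, dt=0.001):
    """One admittance update: m*a + b*v + k*(x_cmd - x_ref) = f_total.
    The external (contact + virtual) force deflects the command trajectory."""
    f_total = f_contact + virtual_force(depth)
    a = (f_total - b * v_cmd - k * (x_cmd - x_ref)) / m
    v_cmd += a * dt
    x_cmd += v_cmd * dt
    return x_cmd, v_cmd

# Before any contact, the virtual force already softens the approach:
x_cmd, v_cmd = 0.0, 0.0
for _ in range(1000):
    x_cmd, v_cmd = compliance_step(x_ref=0.0, x_cmd=x_cmd, v_cmd=v_cmd,
                                   f_contact=0.0, depth=0.05)
print(round(x_cmd, 4))  # approaches the equilibrium offset f_total/k = 0.0125
```

In the paper's setting both controllers would shift the mobile manipulator's command reference in the same way; the point of the sketch is that a visual "virtual force" plugs into the same admittance law as a measured contact force.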

  6. The Development of Object Function and Manipulation Knowledge: Evidence from a Semantic Priming Study

    PubMed Central

    Collette, Cynthia; Bonnotte, Isabelle; Jacquemont, Charlotte; Kalénine, Solène; Bartolo, Angela

    2016-01-01

Object semantics include object function and manipulation knowledge. Function knowledge refers to the goal attainable by using an object (e.g., the function of a key is to open or close a door) while manipulation knowledge refers to gestures one has to execute to use an object appropriately (e.g., a key is held between the thumb and the index, inserted into the door lock and then turned). To date, several studies have assessed function and manipulation knowledge in brain lesion patients as well as in healthy adult populations. In patients with left brain damage, a double dissociation between these two types of knowledge has been reported; on the other hand, behavioral studies in healthy adults show that function knowledge is processed faster than manipulation knowledge. Empirical evidence has shown that object interaction in children differs from that in adults, suggesting that the access to function and manipulation knowledge in children might also differ. To investigate the development of object function and manipulation knowledge, 51 typically developing 8-, 9- and 10-year-old children and 17 healthy young adults were tested on a naming task associated with a semantic priming paradigm (190-ms SOA; prime duration: 90 ms) in which a series of line drawings of manipulable objects were used. Target objects could be preceded by three priming contexts: related (e.g., knife-scissors for function; key-screwdriver for manipulation), unrelated but visually similar (e.g., glasses-scissors; baseball bat-screwdriver), and purely unrelated (e.g., die-scissors; tissue-screwdriver). Results showed a different developmental pattern of function and manipulation priming effects. Function priming effects were not present in children and emerged only in adults, with faster naming responses for targets preceded by objects sharing the same function. In contrast, manipulation priming effects were already present in 8-year-olds, with faster naming responses for targets preceded by objects sharing the same manipulation, and these decreased linearly between 8 and 10 years of age, with 10-year-olds not differing from adults. Overall, results show that the access to object function and manipulation knowledge changes during development by favoring manipulation knowledge in childhood and function knowledge in adulthood. PMID:27602004

  7. Virtual reality and telerobotics applications of an Address Recalculation Pipeline

    NASA Technical Reports Server (NTRS)

    Regan, Matthew; Pose, Ronald

    1994-01-01

The technology described in this paper was designed to reduce latency in user interactions in immersive virtual reality environments. It is also ideally suited to telerobotic applications such as interaction with remote robotic manipulators in space or in deep-sea operations. In such circumstances, the significant latency in the response to user stimulus caused by communication delays, and the disturbing jerkiness due to low and unpredictable frame rates in compressed-video feedback or computationally limited virtual worlds, can be masked by our techniques. The user is provided with highly responsive visual feedback independent of the communication or computational delays incurred in providing physical video feedback or in rendering virtual-world images. Virtual and physical environments can be combined seamlessly using these techniques.

  8. Feedback traps for virtual potentials

    NASA Astrophysics Data System (ADS)

    Gavrilov, Momčilo; Bechhoefer, John

    2017-03-01

Feedback traps are tools for trapping and manipulating single charged objects, such as molecules in solution. An alternative to optical tweezers and other single-molecule techniques, they use feedback to counteract the Brownian motion of a molecule of interest. The trap first acquires information about a molecule's position and then applies an electric feedback force to move the molecule. Since electric forces are stronger than optical forces at small scales, feedback traps are the best way to trap single molecules without 'touching' them (e.g. by putting them in a small box or attaching them to a tether). Feedback traps can do more than trap molecules: they can also subject a target object to forces that are calculated to be the gradient of a desired potential function U(x). If the feedback loop is fast enough, it creates a virtual potential whose dynamics will be very close to those of a particle in an actual potential U(x). But because the dynamics are entirely a result of the feedback loop (absent the feedback, there is only an object diffusing in a fluid), we are free to specify and then manipulate in time an arbitrary potential U(x,t). Here, we review recent applications of feedback traps to studies on the fundamental connections between information and thermodynamics, a topic where feedback plays an even more fundamental role. We discuss how recursive maximum-likelihood techniques allow continuous calibration, to compensate for drifts in experiments that last for days. We consider ways to estimate work and heat, using them to measure fluctuating energies to a precision of ±0.03 kT over these long experiments. Finally, we compare work and heat measurements of the costs of information erasure, the Landauer limit of kT ln 2 per bit of information erased. We argue that, when you want to know the average heat transferred to a bath in a long protocol, you should instead measure the average work and then infer the heat using the first law of thermodynamics. This article is part of the themed issue 'Horizons of cybernetical physics'.
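The virtual-potential idea lends itself to a toy simulation. The sketch below is ours, not the article's apparatus: an overdamped Brownian particle receives, at each feedback update, a force equal to minus the gradient of a chosen U(x), here harmonic, so its statistics approach those of a particle in a real trap. Parameter values and the simple Euler discretization are illustrative assumptions.

```python
import math, random

# Toy feedback trap: an overdamped Brownian particle in 1D.
# Each loop iteration "measures" the position and applies a feedback
# force F = -dU/dx with U(x) = 0.5 * k * x**2, creating a virtual
# harmonic potential. Units are nondimensionalized (gamma = kT = 1).

def simulate_feedback_trap(k=1.0, gamma=1.0, kT=1.0, dt=0.01,
                           steps=200_000, seed=1):
    rng = random.Random(seed)
    x, samples = 0.0, []
    noise = math.sqrt(2 * kT * dt / gamma)  # Brownian kick amplitude
    for _ in range(steps):
        force = -k * x                      # feedback force from measured x
        x += force * dt / gamma + noise * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

samples = simulate_feedback_trap()
var = sum(s * s for s in samples) / len(samples)
# Equipartition in the virtual potential predicts <x^2> = kT / k = 1.
print(round(var, 2))
```

With feedback switched off (force = 0) the same loop is just free diffusion, which is the point the abstract makes: the potential exists only through the loop, so U(x, t) can be reprogrammed at will.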

  9. A microbased shared virtual world prototype

    NASA Technical Reports Server (NTRS)

    Pitts, Gerald; Robinson, Mark; Strange, Steve

    1993-01-01

    Virtual reality (VR) allows sensory immersion and interaction with a computer-generated environment. The user adopts a physical interface with the computer, through Input/Output devices such as a head-mounted display, data glove, mouse, keyboard, or monitor, to experience an alternate universe. What this means is that the computer generates an environment which, in its ultimate extension, becomes indistinguishable from the real world. 'Imagine a wraparound television with three-dimensional programs, including three-dimensional sound, and solid objects that you can pick up and manipulate, even feel with your fingers and hands.... 'Imagine that you are the creator as well as the consumer of your artificial experience, with the power to use a gesture or word to remold the world you see and hear and feel. That part is not fiction... three-dimensional computer graphics, input/output devices, computer models that constitute a VR system make it possible, today, to immerse yourself in an artificial world and to reach in and reshape it.' Our research's goal was to propose a feasibility experiment in the construction of a networked virtual reality system, making use of current personal computer (PC) technology. The prototype was built using Borland C compiler, running on an IBM 486 33 MHz and a 386 33 MHz. Each game currently is represented as an IPX client on a non-dedicated Novell server. We initially posed the two questions: (1) Is there a need for networked virtual reality? (2) In what ways can the technology be made available to the most people possible?

  10. Virtual Reality in Neurointervention.

    PubMed

    Ong, Chin Siang; Deib, Gerard; Yesantharao, Pooja; Qiao, Ye; Pakpoor, Jina; Hibino, Narutoshi; Hui, Ferdinand; Garcia, Juan R

    2018-06-01

    Virtual reality (VR) allows users to experience realistic, immersive 3D virtual environments with the depth perception and binocular field of view of real 3D settings. Newer VR technology has now allowed for interaction with 3D objects within these virtual environments through the use of VR controllers. This technical note describes our preliminary experience with VR as an adjunct tool to traditional angiographic imaging in the preprocedural workup of a patient with a complex pseudoaneurysm. Angiographic MRI data was imported and segmented to create 3D meshes of bilateral carotid vasculature. The 3D meshes were then projected into VR space, allowing the operator to inspect the carotid vasculature using a 3D VR headset as well as interact with the pseudoaneurysm (handling, rotation, magnification, and sectioning) using two VR controllers. 3D segmentation of a complex pseudoaneurysm in the distal cervical segment of the right internal carotid artery was successfully performed and projected into VR. Conventional and VR visualization modes were equally effective in identifying and classifying the pathology. VR visualization allowed the operators to manipulate the dataset to achieve a greater understanding of the anatomy of the parent vessel, the angioarchitecture of the pseudoaneurysm, and the surface contours of all visualized structures. This preliminary study demonstrates the feasibility of utilizing VR for preprocedural evaluation in patients with anatomically complex neurovascular disorders. This novel visualization approach may serve as a valuable adjunct tool in deciding patient-specific treatment plans and selection of devices prior to intervention.

  11. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, when using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensory incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick®, we tested the effect of sensory aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensory aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower as compared to experiment 1, but the difference was not significant. Future research should further evaluate the hypothesis that passively perceived optical flow plays a role in cybersickness by manipulating the structure of the virtual environment. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  12. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences; due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
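Step (2) of the pipeline, turning a height map into a surface mesh, can be sketched compactly. The snippet below is an illustrative stand-in, not the paper's code: it triangulates a regular grid of height samples into two triangles per cell (for scattered samples, a Delaunay triangulation would be used instead), and anchors a synthetic object by looking up the terrain height under it. The test data are invented.

```python
# Sketch: meshing a height map for compositing synthetic objects.
# heights[y][x] is the recovered terrain height at grid point (x, y).

def triangulate_grid(heights):
    """Return (x, y, z) vertices and index triangles for a height map."""
    h, w = len(heights), len(heights[0])
    verts = [(x, y, heights[y][x]) for y in range(h) for x in range(w)]
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, tris

def place_object(heights, x, y):
    """Anchor a synthetic object on the terrain: z = height under (x, y)."""
    return (x, y, heights[y][x])

heights = [[0.0, 0.2, 0.1],
           [0.3, 0.5, 0.2],
           [0.1, 0.4, 0.0]]
verts, tris = triangulate_grid(heights)
print(len(verts), len(tris))        # 9 vertices, 8 triangles
print(place_object(heights, 1, 1))  # object sits at height 0.5
```

Because every triangle of the height map carries known 3D coordinates, compositing a graphic object (step 3) reduces to this kind of lookup plus standard depth-buffered rendering.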

  13. Interaction rules for symbol-oriented graphical user interfaces

    NASA Astrophysics Data System (ADS)

    Brinkschulte, Uwe; Vogelsang, Holger; Wolf, Luc

    1999-03-01

    This work describes a way of interactively manipulating structured objects by means of interaction rules. Symbols are used as graphical representations of object states; state changes lead to different visual symbol instances. The manipulation of a symbol using interactive devices leads to an automatic state change of the corresponding structured object without any intervention by the application. For this purpose, interaction rules are introduced. These rules describe how a symbol may be manipulated and the effects this manipulation has on the corresponding structured object; they are interpreted by the visualization and interaction service. For each symbol used, a set of interaction rules can be defined. To be as general as possible, all interactions on a symbol are defined as a triple, which specifies the preconditions of all manipulations of this symbol, the manipulations themselves, and the postconditions of all manipulations of this symbol. A manipulation is a quintuplet, which describes the possible initial events of the manipulation, the possible places of these events, the preconditions of the manipulation, its results, and its postconditions. Finally, reflection functions map the results of a manipulation to the new state of a structured object.
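    The triple/quintuplet structure described above can be made concrete with a small sketch. The Python encoding below is illustrative only (class and field names are not from the paper): a `Manipulation` carries the quintuplet, an `InteractionRule` the triple, and `apply` plays the role of the reflection function.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, object]

@dataclass
class Manipulation:
    """Quintuplet: initial events, event places, precondition, result, postcondition."""
    events: List[str]                      # possible initial events, e.g. "click"
    places: List[str]                      # where on the symbol the event may occur
    precondition: Callable[[State], bool]
    result: Callable[[State], State]       # maps the old object state to the new one
    postcondition: Callable[[State], bool]

@dataclass
class InteractionRule:
    """Triple: shared preconditions, the manipulations, shared postconditions."""
    precondition: Callable[[State], bool]
    manipulations: List[Manipulation]
    postcondition: Callable[[State], bool]

    def apply(self, state: State, event: str, place: str) -> State:
        """Reflection step: map a matching manipulation's result to the new state."""
        if not self.precondition(state):
            return state
        for m in self.manipulations:
            if event in m.events and place in m.places and m.precondition(state):
                new_state = m.result(state)
                if m.postcondition(new_state) and self.postcondition(new_state):
                    return new_state
        return state

# Example: a rule whose single manipulation toggles a boolean "on" state
toggle = Manipulation(["click"], ["body"], lambda s: True,
                      lambda s: {**s, "on": not s["on"]}, lambda s: True)
rule = InteractionRule(lambda s: True, [toggle], lambda s: True)
```

    A "click" on the symbol's body then flips the object state with no application code involved, which is the automatic state change the abstract describes.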

  14. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    PubMed

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues relative to each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This held independent of the hitter type and whether performance feedback was available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.
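    The timing problem participants face can be illustrated with elementary kinematics. The sketch below (Python; the numbers and function names are illustrative, not taken from the study) computes when the button must be pressed so that a hitter with a fixed actuation delay meets a target released from rest under a constant virtual gravity:

```python
G = 9.81  # magnitude of the virtual gravity, m/s^2 (illustrative value)

def time_to_contact(drop_height, g=G):
    """Fall time of a target released from rest over drop_height metres
    under constant acceleration g: t = sqrt(2h / g)."""
    return (2.0 * drop_height / g) ** 0.5

def press_time(drop_height, hitter_latency, g=G):
    """Latest press time: the button must be pressed hitter_latency
    seconds before the target reaches the interception point."""
    return time_to_contact(drop_height, g) - hitter_latency
```

    Inverting the target's virtual gravity leaves this arithmetic unchanged, yet the study found interception suffers when target gravity and scene orientation disagree, pointing to visual rather than computational limits.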

  15. Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.

    PubMed

    Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz

    2012-01-01

    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object echoes (e.g., echo amplitude) that correlate with the object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video recordings, we first show that the bats evaded a small real object (an ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and was played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (the sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion, and we suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
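    The playback technique rests on the fact that an echo is the convolution of the emitted call with the object's acoustic impulse response. A minimal sketch (plain Python, illustrative values): for a pure delay-and-attenuate impulse response, the synthesized echo is a scaled, shifted copy of the call, which is why it can preserve spectro-temporal features while, coming from a single loudspeaker, lacking the spatial spread of a real large object:

```python
def convolve(x, h):
    """Direct-form discrete convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

call = [1.0, 0.5, -0.25]   # toy bat call (samples)
ir = [0.0] * 8
ir[5] = 0.1                # 5-sample propagation delay, 10x attenuation
echo = convolve(call, ir)  # a delayed, attenuated copy of the call
```

    A real object's impulse response would instead spread energy over time and, crucially, over arrival angles at the two ears, the sonar aperture the authors highlight.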

  16. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, and computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for the basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. 
    The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and the simulation of human-machine systems in a desktop-sized work volume.

  17. The influence of a learning object with virtual simulation for dentistry: A randomized controlled trial.

    PubMed

    Tubelo, Rodrigo Alves; Branco, Vicente Leitune Castelo; Dahmer, Alessandra; Samuel, Susana Maria Werner; Collares, Fabrício Mezzomo

    2016-01-01

    The study aimed to evaluate the influence of a virtual learning object (VLO) on the theoretical knowledge and practical skills of undergraduate dentistry students as they relate to zinc phosphate cement (ZPC). Only students enrolled in the dentistry course were included in the trial. Forty-six students received a live class regarding ZPC and were randomized by electronic sorting into the following four groups: VLO immediate (GIVLO, n = 9), VLO longitudinal (GLVLO, n = 15) and two control groups without VLO (GIC, n = 9 and GLC, n = 13). The immediate groups had access to the VLO or a book for 20 min before the ability assessment, whereas the longitudinal groups had access to the VLO or a book for 15 days. A pre- and posttest on theoretical knowledge and two laboratory skill tests regarding zinc phosphate cement manipulation, evaluated by blinded examiners, were performed in all groups. The students who used the VLO obtained better results in all the tests performed than the control students. The theoretical posttest showed a significant difference between the longitudinal groups, GLC (6.0 ± 1.15) and GLVLO (7.33 ± 1.43). Film thickness was significantly lower in the VLO groups: GIC (25 ± 9.3) vs. GIVLO (16.24 ± 5.17), and GLC (50 ± 27.08) vs. GLVLO (22.5 ± 9.65). Setting time was longer in the VLO groups, with a significant difference in the immediate groups: GIC (896 ± 218.90) vs. GIVLO (1138.5 ± 177.95). The ZPC manipulated by the students who used the VLO had better mechanical properties in the laboratory tests. Therefore, the groups that used the VLO had clinical handling skills superior to their controls and greater retention of knowledge after 15 days. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported, with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after an automated OSF-based lung segmentation was employed. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of interaction is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required to reach complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  19. The specificity of memory enhancement during interaction with a virtual environment.

    PubMed

    Brooks, B M; Attree, E A; Rose, F D; Clifford, B R; Leadbetter, A G

    1999-01-01

    Two experiments investigated differences between active and passive participation in a computer-generated virtual environment in terms of spatial memory, object memory, and object location memory. It was found that active participants, who controlled their movements in the virtual environment using a joystick, recalled the spatial layout of the virtual environment better than passive participants, who merely watched the active participants' progress. Conversely, there were no significant differences between the active and passive participants' recall or recognition of the virtual objects, nor in their recall of the correct locations of objects in the virtual environment. These findings are discussed in terms of subject-performed task research and the specificity of memory enhancement in virtual environments.

  20. Armagh Observatory - Historic Building Information Modelling for Virtual Learning in Building Conservation

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Chenaux, A.; Keenaghan, G.; Gibson, V.; Butler, J.; Pybus, C.

    2017-08-01

    In this paper the recording and design of a Virtual Reality Immersive Model of Armagh Observatory is presented, which will replicate the historic buildings and landscape, with distant meridian markers and the positions of its principal historic instruments, within a model of the night sky showing the positions of bright stars. The virtual reality model can be used for educational purposes, allowing the instruments within the historic building model to be manipulated in 3D space to demonstrate how position measurements of stars were made in the 18th century. A description is given of current student and researcher activities concerning on-site recording and surveying and the virtual modelling of the buildings and landscape. This is followed by a design for a Virtual Reality Immersive Model of Armagh Observatory using game engines and virtual learning platforms and concepts.

  1. Remote laboratories for optical metrology: from the lab to the cloud

    NASA Astrophysics Data System (ADS)

    Osten, W.; Wilke, M.; Pedrini, G.

    2012-10-01

    The idea of remote and virtual metrology was reported as early as 2000, with a conceptual illustration using comparative digital holography aimed at the comparison of two nominally identical but physically different objects, e.g., master and sample, in industrial inspection processes. However, the concept of remote and virtual metrology can be extended far beyond this. For example, it not only allows the transmission of static holograms over the Internet, but also provides an opportunity to communicate with, and eventually control, the physical set-up of a remote metrology system. Furthermore, the metrology system can be modeled in a 3D virtual-reality environment using CAD or similar technology, providing a more intuitive interface to the physical setup within the virtual world. An engineer or scientist who would like to access the remote real-world system can log on to the virtual system, move and manipulate the setup through an avatar, and take the desired measurements. The real metrology system responds to the interaction between the avatar and the 3D virtual representation. The measurement data are stored and interpreted automatically for appropriate display within the virtual world, providing the necessary feedback to the experimenter. Such a system opens up many novel opportunities in industrial inspection, such as remote master-sample comparison and the virtual assembly of parts that are fabricated at different places. Moreover, a multitude of new techniques can be envisaged. 
    These include modern ways of documenting, efficient methods for metadata storage, the possibility of remote reviewing of experimental results, the addition of real experiments to publications by providing remote access to the metadata and the experimental setup via the Internet, the presentation of complex experiments in classrooms and lecture halls, the sharing of expensive and complex infrastructure within international collaborations, the implementation of new ways to remotely test, maintain and service new devices, and many more. The paper describes the idea of remote laboratories and illustrates the potential of the approach with selected examples, with special attention to optical metrology.

  2. Perspectives on object manipulation and action grammar for percussive actions in primates

    PubMed Central

    Hayashi, Misato

    2015-01-01

    The skill of object manipulation is a common feature of primates including humans, although there are species-typical patterns of manipulation. Object manipulation can be used as a comparative scale of cognitive development, focusing on its complexity. Nut cracking in chimpanzees has the highest hierarchical complexity of tool use reported in non-human primates. An analysis of the patterns of object manipulation in naive chimpanzees after nut-cracking demonstrations revealed the cause of difficulties in learning nut-cracking behaviour. Various types of behaviours exhibited within a nut-cracking context can be examined in terms of the application of problem-solving strategies, focusing on their basis in causal understanding or insightful intentionality. Captive chimpanzees also exhibit complex forms of combinatory manipulation, which is the precursor of tool use. A new notation system of object manipulation was invented to assess grammatical rules in manipulative actions. The notation system of action grammar enabled direct comparisons to be made between primates including humans in a variety of object-manipulation tasks, including percussive-tool use. PMID:26483528

  3. Perspectives on object manipulation and action grammar for percussive actions in primates.

    PubMed

    Hayashi, Misato

    2015-11-19

    The skill of object manipulation is a common feature of primates including humans, although there are species-typical patterns of manipulation. Object manipulation can be used as a comparative scale of cognitive development, focusing on its complexity. Nut cracking in chimpanzees has the highest hierarchical complexity of tool use reported in non-human primates. An analysis of the patterns of object manipulation in naive chimpanzees after nut-cracking demonstrations revealed the cause of difficulties in learning nut-cracking behaviour. Various types of behaviours exhibited within a nut-cracking context can be examined in terms of the application of problem-solving strategies, focusing on their basis in causal understanding or insightful intentionality. Captive chimpanzees also exhibit complex forms of combinatory manipulation, which is the precursor of tool use. A new notation system of object manipulation was invented to assess grammatical rules in manipulative actions. The notation system of action grammar enabled direct comparisons to be made between primates including humans in a variety of object-manipulation tasks, including percussive-tool use. © 2015 The Author(s).

  4. Improving Geoscience Outreach Through Multimedia Enhanced Web Sites - An Example From Connecticut

    NASA Astrophysics Data System (ADS)

    Hyatt, J. A.; Coron, C. R.; Schroeder, T. J.; Fleming, T.; Drzewiecki, P. A.

    2005-12-01

    Although large governmental web sites (e.g. USGS, NASA) are important resources, particularly in relation to phenomena with global to regional significance (e.g. recent tsunami and hurricane disasters), smaller academic web portals continue to make substantive contributions to web-based learning in the geosciences. The strength of "home-grown" web sites is that they can easily be tailored to specific classes, they often focus on local geologic content, and they can integrate classroom, laboratory, and field-based learning in ways that improve introductory classes. Furthermore, innovative multimedia techniques, including virtual reality, image manipulation, and interactive streaming video, can improve visualization and be particularly helpful for first-time geology students. This poster reports on one such web site, Learning Tools in Earth Science (LTES, http://www.easternct.edu/personal/faculty/hyattj/LTES-v2/), a site developed by geoscience faculty at two state institutions. In contrast to some large web sites with media development teams, the LTES geoscientists, with strong support from media and IT service departments, are responsible for geologic content and verification, media development and editing, and web development and authoring. As such, we have considerable control over both the content and design of the site. At present the main content modules for LTES include "mineral" and "virtual field trip" links. The mineral module includes an interactive mineral gallery and a virtual mineral box of 24 unidentified samples that are identical to those used in some of our classes. Students navigate an intuitive web portal to manipulate images and view streaming video segments that explain and demonstrate standard mineral identification tests. New elements highlighted in our poster include links to a virtual petrographic microscope, in which users can manipulate images to simulate stage rotation in both plane- and cross-polarized light. 
    Virtual field trips include video-based excursions to sites in Georgia, Connecticut and Greenland. New to these VFTs is the integration of "virtual walks" in which users are able to navigate through some field sites in a virtual sense. Development of this resource is ongoing, but responses from students, faculty outside of Earth Science and K-12 instructors indicate that this small web site can provide useful resources for educators utilizing web-based learning in their courses.

  5. Latency and User Performance in Virtual Environments and Augmented Reality

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    2009-01-01

    System rendering latency has been recognized by senior researchers, such as Professor Frederick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, disturb user interaction with virtual objects, and contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user-performance impact provided by several experiments. These experiments review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal-processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter that does not use a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several examples of the effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation. 
    The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics and human-computer interaction.
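    The extrapolative predictor mentioned above can be sketched in a few lines. The version below (Python; a hypothetical illustration, not the experimental implementation) linearly extrapolates the latest tracker sample forward by the display latency; tuning the extrapolation horizon too aggressively produces exactly the overshoot artifact the experiments guard against:

```python
def predict(samples, dt, latency):
    """Extrapolative prediction without a dynamic movement model:
    estimate velocity from the last two tracker samples and project
    the pose forward by the system latency."""
    x_prev, x_now = samples[-2], samples[-1]
    velocity = (x_now - x_prev) / dt   # finite-difference velocity estimate
    return x_now + velocity * latency  # predicted pose at display time
```

    In practice each pose component (position and orientation) would be predicted separately, and the effective `latency` would be tuned empirically, as the abstract describes, rather than set to the nominal system delay.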

  6. Comparison of the Efficacy and Efficiency of the Use of Virtual Reality Simulation With High-Fidelity Mannequins for Simulation-Based Training of Fiberoptic Bronchoscope Manipulation.

    PubMed

    Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi

    2018-04-01

    This study compared the efficacy and efficiency of virtual reality simulation (VRS) with high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as well as changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M = 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than mannequin in simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved in both modalities after adequate training.

  7. You Spin my Head Right Round: Threshold of Limited Immersion for Rotation Gains in Redirected Walking.

    PubMed

    Schmitz, Patric; Hildebrandt, Julian; Valdez, Andre Calero; Kobbelt, Leif; Ziefle, Martina

    2018-04-01

    In virtual environments, the space that can be explored by real walking is limited by the size of the tracked area. To enable unimpeded walking through large virtual spaces in small real-world surroundings, redirection techniques are used. These unnoticeably manipulate the user's virtual walking trajectory. It is important to know how strongly such techniques can be applied without the user noticing the manipulation or becoming cybersick. Previously, this was estimated by measuring a detection threshold (DT) in highly controlled psychophysical studies, which experimentally isolate the effect but do not aim for perceived immersion in the context of VR applications. While these studies suggest that only relatively low degrees of manipulation are tolerable, we claim that, besides establishing detection thresholds, it is important to know when the user's immersion breaks. We hypothesize that the degree of unnoticed manipulation is significantly different from the detection threshold when the user is immersed in a task. We conducted three studies: (a) to devise an experimental paradigm to measure the threshold of limited immersion (TLI), (b) to measure the TLI for slowly decreasing and increasing rotation gains, and (c) to establish a baseline of cybersickness for our experimental setup. For rotation gains greater than 1.0, we found that immersion breaks quite late after the gain becomes detectable. However, for gains less than 1.0, some users reported a break of immersion even before established detection thresholds were reached. Apparently, the developed metric measures an additional quality of user experience. This article contributes to the development of effective spatial compression methods by utilizing the break of immersion as a benchmark for redirection techniques.
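    A rotation gain, the quantity whose tolerable range the TLI characterizes, simply scales the user's physical head rotation before it is applied to the virtual camera. A minimal sketch (Python, illustrative names):

```python
def redirected_yaw_step(virtual_yaw, real_yaw_delta, gain):
    """Advance the virtual heading by the measured physical head rotation
    scaled by the rotation gain: gain > 1 turns the scene faster than the
    head, gain < 1 slower, gain = 1 leaves the trajectory unmanipulated."""
    return virtual_yaw + gain * real_yaw_delta

def physical_turn_needed(virtual_turn, gain):
    """Physical rotation required to produce a given virtual rotation."""
    return virtual_turn / gain
```

    With a gain of 1.2, for example, a full 360° virtual turn requires only 300° of physical rotation, which is how redirection compresses large virtual spaces into a small tracked area.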

  8. Status Report for Remediation Decision Support Project, Task 1, Activity 1.B – Physical and Hydraulic Properties Database and Interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockhold, Mark L.

    2008-09-26

    The objective of Activity 1.B of the Remediation Decision Support (RDS) Project is to compile all available physical and hydraulic property data for sediments from the Hanford Site, to port these data into the Hanford Environmental Information System (HEIS), and to make the data web-accessible to anyone on the Hanford Local Area Network via the so-called Virtual Library. In past years, efforts were made by RDS project staff to compile all available physical and hydraulic property data for Hanford sediments and to transfer these data into SoilVision®, a commercial geotechnical software package designed for storing, analyzing, and manipulating soils data. Although SoilVision® has proven to be useful, its access and use restrictions have been recognized as a limitation to the effective use of the physical and hydraulic property databases by the broader group of potential users involved in Hanford waste site issues. In order to make these data more widely available and usable, a decision was made to port them to HEIS and to make them web-accessible via a Virtual Library module. In FY08 the objectives of Activity 1.B of the RDS Project were to: (1) ensure traceability and defensibility of all physical and hydraulic property data currently residing in the SoilVision® database maintained by PNNL, (2) transfer the physical and hydraulic property data from the Microsoft Access database files used by SoilVision® into HEIS, which has most recently been maintained by Fluor-Hanford, Inc., (3) develop a Virtual Library module for accessing these data from HEIS, and (4) write a User's Manual for the Virtual Library module. The development of the Virtual Library module was to be performed by a third party under subcontract to Fluor. 
    The intent of these activities is to make the available physical and hydraulic property data more readily accessible and usable by technical staff and operable unit managers involved in waste site assessments and remedial action decisions for Hanford. This status report describes the history of this development effort and progress to date.

  9. Spatial issues in user interface design from a graphic design perspective

    NASA Technical Reports Server (NTRS)

    Marcus, Aaron

    1989-01-01

    The user interface of a computer system is a visual display that provides information about the status of operations on data within the computer and control options to the user that enable adjustments to these operations. From the very beginning of computer technology the user interface was a spatial display, although its spatial features were not necessarily complex or explicitly recognized by the users. All text and nonverbal signs appeared in a virtual space generally thought of as a single flat plane of symbols. Current technology of high performance workstations permits any element of the display to appear as dynamic, multicolor, 3-D signs in a virtual 3-D space. The complexity of appearance and the user's interaction with the display provide significant challenges to the graphic designer of current and future user interfaces. In particular, spatial depiction provides many opportunities for effective communication of objects, structures, processes, navigation, selection, and manipulation. Issues are presented that are relevant to the graphic designer seeking to optimize the user interface's spatial attributes for effective visual communication.

  10. Influence of gait mode and body orientation on following a walking avatar.

    PubMed

    Meerhoff, L Rens A; de Poel, Harjo J; Jowett, Tim W D; Button, Chris

    2017-08-01

    Regulating distance with a moving object or person is a key component of human movement and of skillful interpersonal coordination. The current set of experiments aimed to assess the role of gait mode and body orientation on distance regulation using a cyclical locomotor tracking task in which participants followed a virtual leader. In the first experiment, participants moved in the backward-forward direction while the body orientation of the virtual leader was manipulated (i.e., facing towards, or away from the follower), hence imposing an incongruence in gait mode between leader and follower. Distance regulation was spatially less accurate when followers walked backwards. Additionally, a clear trade-off was found between spatial leader-follower accuracy and temporal synchrony. Any perceptual effects were overshadowed by the effect of one's gait mode. In the second experiment we examined lateral following. The results suggested that lateral following was also constrained strongly by perceptual information presented by the leader. Together, these findings demonstrated how locomotor tracking depends on gait mode, but also on the body orientation of whoever is being followed. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Distance Perception of Stereoscopically Presented Virtual Objects Optically Superimposed on Physical Objects by a Head-Mounted See-Through Display

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Bucher, Urs J.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    The influence of physically presented background stimuli on the perceived depth of optically overlaid, stereoscopic virtual images has been studied using head-mounted, stereoscopic, virtual image displays. These displays allow presentation of physically unrealizable stimulus combinations. Positioning an opaque physical object either at the initial perceived depth of the virtual image, or at a position substantially in front of the virtual image, causes the virtual image to perceptually move closer to the observer. In the case of objects positioned substantially in front of the virtual image, subjects often perceive the opaque object to become transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not due to occlusion cues. Accordingly, it may have an alternative cause, such as variation in the binocular vergence position of the eyes caused by introduction of the physical object. This effect may complicate the design of overlaid virtual image displays for near objects and appears to be related to the relative conspicuousness of the overlaid virtual image and the background. Consequently, it may be related to earlier analyses by John Foley, which modeled open-loop pointing errors to stereoscopically presented points of light in terms of errors in determining a reference point for the interpretation of observed retinal disparities. Implications for the design of see-through displays for manufacturing will be discussed.

  12. Level of Immersion in Virtual Environments Impacts the Ability to Assess and Teach Social Skills in Autism Spectrum Disorder

    PubMed Central

    Bugnariu, Nicoleta L.

    2016-01-01

    Virtual environments (VEs) may be useful for delivering social skills interventions to individuals with autism spectrum disorder (ASD). Immersive VEs provide opportunities for individuals with ASD to learn and practice skills in a controlled replicable setting. However, not all VEs are delivered using the same technology, and the level of immersion differs across settings. We group studies into low-, moderate-, and high-immersion categories by examining five aspects of immersion. In doing so, we draw conclusions regarding the influence of this technical manipulation on the efficacy of VEs as a tool for assessing and teaching social skills. We also highlight ways in which future studies can advance our understanding of how manipulating aspects of immersion may impact intervention success. PMID:26919157

  13. Training for percutaneous renal access on a virtual reality simulator.

    PubMed

    Zhang, Yi; Yu, Cheng-fan; Liu, Jin-shun; Wang, Gang; Zhu, He; Na, Yan-qun

    2013-01-01

    The need to develop new methods of surgical training, combined with advances in computing, has led to the development of virtual reality surgical simulators. The PERC Mentor(TM) is designed to train the user in percutaneous renal collecting system access puncture. This study aimed to validate the use of this kind of simulator in percutaneous renal access training. Twenty-one urologists were enrolled as trainees to learn a fluoroscopy-guided percutaneous renal access technique. An assigned percutaneous renal access procedure was performed on the PERC Mentor(TM) immediately after the trainees watched an instruction video and an analogous demonstration. Objective parameters were recorded by the simulator and subjective global rating scale (GRS) scores were determined. Simulation training followed, consisting of 2-hour daily training sessions for 2 consecutive days. Twenty-four hours after the training session, trainees were evaluated performing the same procedure, and the post-training evaluation was compared to that of the initial attempt. During the initial attempt, none of the trainees could complete the assigned procedure, owing to their lack of experience in fluoroscopy-guided percutaneous renal access. After the short-term training, all trainees were able to complete the procedure independently. Of the 21 trainees, 10 had prior experience in ultrasound-guided percutaneous nephrolithotomy, so trainees were categorized into experienced and inexperienced groups. The total operating time and amount of contrast material used were significantly lower in the experienced group than in the inexperienced group (P = 0.03 and 0.02, respectively). Training on the virtual reality simulator PERC Mentor(TM) can help trainees with no previous experience of fluoroscopy-guided percutaneous renal access to complete the virtual procedure independently. 
This virtual reality simulator may become an important training and evaluation tool in teaching fluoroscopy-guided percutaneous renal access.

  14. EMERGENCY RESPONSE TEAMS TRAINING IN PUBLIC HEALTH CRISIS - THE SERIOUSNESS OF SERIOUS GAMES.

    PubMed

    Stanojevic, Vojislav; Stanojevic, Cedomirka

    2016-07-01

    The rapid development of multimedia technologies in the last twenty years has led to the emergence of new ways of learning academic and professional skills, namely the application of multimedia technology in the form of software: "serious computer games". Three-Dimensional Virtual Worlds. The basis of this game platform is the platform of three-dimensional virtual worlds, which can be described as communication systems in which participants share the same three-dimensional virtual space, within which they can move, manipulate objects, and communicate through their graphical representatives, avatars. Medical Education and Training. Arguments in favor of these computer tools in the learning process are accessibility, repeatability, low cost, the use of attractive graphics, and a high degree of adaptation to the user. Specifically designed avatars allow students to adapt to their roles in certain situations, especially those considered rare, dangerous, or unethical in real life. Drills for major incidents, which require dedicated training environments, cannot be conducted in the real world due to high costs and the necessity of utilizing extensive resources. In addition, it is impossible to engage all the necessary health personnel at the same time. New technologies intended for conducting training, also called "virtual worlds", make the following possible: training at any time, depending on the user's commitments; simultaneous simulations on multiple levels, in several areas, in different circumstances, including dozens of unique victims; repeated scenarios and learning from mistakes; rapid feedback; and the development of non-technical skills that are critical for reducing errors in dynamic, high-risk environments. Virtual worlds, which should be the subject of further research and improvement in the field of hospital emergency response training for mass casualty incidents, certainly have a promising future.

  15. The building blocks of the full body ownership illusion

    PubMed Central

    Maselli, Antonella; Slater, Mel

    2013-01-01

    Previous work has reported that it is not difficult to give people the illusion of ownership over an artificial body, providing a powerful tool for the investigation of the neural and cognitive mechanisms underlying body perception and self-consciousness. We present an experimental study that uses immersive virtual reality (IVR) focused on identifying the perceptual building blocks of this illusion. We systematically manipulated visuotactile and visual sensorimotor contingencies, visual perspective, and the appearance of the virtual body in order to assess their relative role and mutual interaction. Consistent results from subjective reports and physiological measures showed that a first-person perspective over a fake humanoid body is essential for eliciting a body ownership illusion. We found that the illusion of ownership can be generated when the virtual body has a realistic skin tone and spatially substitutes the real body seen from a first-person perspective. In this case there is no need for an additional contribution of congruent visuotactile or sensorimotor cues. Additionally, we found that the processing of incongruent perceptual cues can be modulated by the level of the illusion: when the illusion is strong, incongruent cues are not experienced as incorrect. Participants exposed to asynchronous visuotactile stimulation can experience the ownership illusion and perceive touch as originating from an object seen to contact the virtual body. Analogously, when the level of realism of the virtual body is not high enough and/or when there is no spatial overlap between the two bodies, then the contribution of congruent multisensory and/or sensorimotor cues is required for evoking the illusion. On the basis of these results and inspired by findings from neurophysiological recordings in the monkey, we propose a model that accounts for many of the results reported in the literature. PMID:23519597

  16. Validation of virtual learning object to support the teaching of nursing care systematization.

    PubMed

    Salvador, Pétala Tuani Candido de Oliveira; Mariz, Camila Maria Dos Santos; Vítor, Allyne Fortes; Ferreira Júnior, Marcos Antônio; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira

    2018-01-01

    To describe the content validation process of a Virtual Learning Object to support the teaching of nursing care systematization to nursing professionals. A methodological study with a quantitative approach, developed according to the methodological framework of Pasquali's psychometrics and conducted from March to July 2016, using a two-stage Delphi procedure. In the Delphi 1 stage, eight judges evaluated the Virtual Object; in the Delphi 2 stage, seven judges evaluated it. The seven screens of the Virtual Object were analyzed as to the suitability of their contents. The Virtual Learning Object to support the teaching of nursing care systematization was considered valid in its content, with a Total Content Validity Coefficient of 0.96. It is expected that the Virtual Object can support the teaching of nursing care systematization in light of appropriate and effective pedagogical approaches.
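
    A Content Validity Coefficient of the kind reported above can be computed from judges' ratings. The sketch below follows the common Hernández-Nieto-style formulation (mean rating divided by the maximum rating, minus a chance-agreement penalty); the ratings and the 1-5 scale are illustrative assumptions, not the study's data.

    ```python
    # Illustrative content-validity computation for judge ratings on a
    # 1-5 scale. Ratings are invented, not the study's actual data.
    MAX_RATING = 5

    def item_cvc(ratings):
        """CVC for one item: mean rating / max possible rating, minus a
        chance-agreement penalty of (1/J)^J for J judges."""
        j = len(ratings)
        return sum(ratings) / j / MAX_RATING - (1 / j) ** j

    def total_cvc(items):
        """Total CVC: mean of the per-item coefficients."""
        cvcs = [item_cvc(r) for r in items]
        return sum(cvcs) / len(cvcs)

    # Example: two items, each rated by four hypothetical judges.
    score = total_cvc([[5, 5, 5, 5], [4, 5, 5, 5]])
    ```

    Values above a conventional cutoff (often 0.80) are taken as evidence of content validity.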

  17. Repetition Blindness Reveals Differences between the Representations of Manipulable and Nonmanipulable Objects

    ERIC Educational Resources Information Center

    Harris, Irina M.; Murray, Alexandra M.; Hayward, William G.; O'Callaghan, Claire; Andrews, Sally

    2012-01-01

    We used repetition blindness to investigate the nature of the representations underlying identification of manipulable objects. Observers named objects presented in rapid serial visual presentation streams containing either manipulable or nonmanipulable objects. In half the streams, 1 object was repeated. Overall accuracy was lower when streams…

  18. Augmented Reality Technology Using Microsoft HoloLens in Anatomic Pathology.

    PubMed

    Hanna, Matthew G; Ahmed, Ishtiaque; Nine, Jeffrey; Prajapati, Shyam; Pantanowitz, Liron

    2018-05-01

    Context: Augmented reality (AR) devices such as the Microsoft HoloLens have not been well used in the medical field. Objective: To test the HoloLens for clinical and nonclinical applications in pathology. Design: A Microsoft HoloLens was tested for virtual annotation during autopsy, viewing 3D gross and microscopic pathology specimens, navigating whole slide images, telepathology, as well as real-time pathology-radiology correlation. Results: Pathology residents performing an autopsy while wearing the HoloLens were remotely instructed with real-time diagrams, annotations, and voice instruction. 3D-scanned gross pathology specimens could be viewed as holograms and easily manipulated. Telepathology was supported during gross examination and at the time of intraoperative consultation, allowing users to remotely access a pathologist for guidance and to virtually annotate areas of interest on specimens in real time. The HoloLens permitted radiographs to be coregistered on gross specimens and thereby enhanced locating important pathologic findings. The HoloLens also allowed easy viewing and navigation of whole slide images, using an AR workstation, including multiple coregistered tissue sections facilitating volumetric pathology evaluation. Conclusions: The HoloLens is a novel AR tool with multiple clinical and nonclinical applications in pathology. The device was comfortable to wear, easy to use, provided sufficient computing power, and supported high-resolution imaging. It was useful for autopsy, gross and microscopic examination, and ideally suited for digital pathology. Unique applications include remote supervision and annotation, 3D image viewing and manipulation, telepathology in a mixed-reality environment, and real-time pathology-radiology correlation.

  19. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  20. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  1. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it.

  2. Interactive voxel graphics in virtual reality

    NASA Astrophysics Data System (ADS)

    Brody, Bill; Chappell, Glenn G.; Hartman, Chris

    2002-06-01

    Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.

  3. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis

    PubMed Central

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153
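
    The honeycomb model's key idea, a unified interface for searching, viewing, and manipulating objects of all types, can be sketched as below. The class and field names are illustrative stand-ins, not HIVE's actual API.

    ```python
    # Sketch of a honeycomb-style object model: heterogeneous record
    # types share one metadata interface, so a single search call spans
    # all types. Names are illustrative, not HIVE's real classes.
    class DataObject:
        registry = []

        def __init__(self, **metadata):
            self.metadata = metadata
            DataObject.registry.append(self)

        @classmethod
        def search(cls, **criteria):
            """Unified search: match on metadata keys regardless of the
            concrete object type, so new types need no new query code."""
            return [o for o in cls.registry
                    if all(o.metadata.get(k) == v
                           for k, v in criteria.items())]

    # Adding a new data type is just a subclass; no schema rework needed.
    class SequenceRun(DataObject): pass
    class Annotation(DataObject): pass

    SequenceRun(sample="A12", platform="illumina")
    Annotation(sample="A12", feature="gene")
    hits = DataObject.search(sample="A12")
    print(len(hits))
    ```

    A production system would layer the access-control rules the abstract describes on top of such a lookup, filtering the result set by the caller's privileges.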

  4. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis.

    PubMed

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.

  5. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch the virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike traditional multitouch system, this system cannot only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize the 3D interaction of the virtual objects by recognizing and analyzing the multi-touch that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.
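
    The 2D gestures the showcase recognizes, translation, scaling, and rotation from simultaneous touches, can be derived from two touch points in the standard way: translation from the centroid shift, scale from the distance ratio, rotation from the angle change. This is a generic sketch of that derivation, not the paper's algorithm.

    ```python
    import math

    def two_finger_transform(p0, p1, q0, q1):
        """Derive (translation, scale, rotation) from a two-finger
        gesture: (p0, p1) are the initial touch points, (q0, q1) the
        current ones."""
        # Translation: how far the midpoint of the two touches moved.
        cx0, cy0 = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
        cx1, cy1 = (q0[0] + q1[0]) / 2, (q0[1] + q1[1]) / 2
        translation = (cx1 - cx0, cy1 - cy0)
        # Scale: ratio of current to initial finger separation.
        scale = math.dist(q0, q1) / math.dist(p0, p1)
        # Rotation: change in the angle of the line between the fingers.
        a0 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        a1 = math.atan2(q1[1] - q0[1], q1[0] - q0[0])
        rotation = a1 - a0
        return translation, scale, rotation
    ```

    Extending this to the showcase's 3D interaction would combine such per-surface transforms from touches captured on all four planes.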

  6. Localization of Virtual Objects in the Near Visual Field (Operator Interaction with Simple Virtual Objects)

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Menges, Brian M.

    1998-01-01

    Errors in the localization of nearby virtual objects presented via see-through, helmet mounted displays are examined as a function of viewing conditions and scene content in four experiments using a total of 38 subjects. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. These errors apparently arise from the occlusion of the physical background by the optically superimposed virtual objects. But they are modified by subjects' accommodative competence and specific viewing conditions. The apparent physical size and transparency of the virtual objects and physical surfaces respectively are influenced by their relative position when superimposed. The design implications of the findings are discussed in a concluding section.

  7. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
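
    In practice, a normed database like this is used to select stimuli matched on the controlled variables. A minimal sketch, with invented norm values rather than the database's actual entries:

    ```python
    # Illustrative filter over a normed stimulus set: keep only objects
    # whose name agreement and familiarity exceed thresholds, so that
    # experimental conditions stay matched. Values below are made up.
    STIMULI = [
        {"name": "hammer", "name_agreement": 0.97, "familiarity": 4.5},
        {"name": "anvil",  "name_agreement": 0.62, "familiarity": 2.1},
        {"name": "apple",  "name_agreement": 0.99, "familiarity": 4.8},
    ]

    def select(stimuli, min_agreement=0.9, min_familiarity=4.0):
        """Return names of stimuli passing both norming thresholds."""
        return [s["name"] for s in stimuli
                if s["name_agreement"] >= min_agreement
                and s["familiarity"] >= min_familiarity]

    print(select(STIMULI))
    ```

    The same pattern extends to the other normed dimensions (image agreement, visual complexity, lexical characteristics).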

  8. Electromagnetic fasteners

    DOEpatents

    Crane, Randolph W.; Marts, Donna J.

    1994-11-01

    An electromagnetic fastener for manipulating objects in space uses the magnetic attraction of various metals. An end effector is attached to a robotic manipulating system having an electromagnet such that when current is supplied to the electromagnet, the object is drawn and affixed to the end effector, and when the current is withheld, the object is released. The object to be manipulated includes a multiplicity of ferromagnetic patches at various locations to provide multiple areas for the effector on the manipulator to become affixed to the object. The ferromagnetic patches are sized relative to the object's geometry and mass.
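
    The grip/release logic the patent describes, current on attaches the object, current off releases it, amounts to a two-state controller. A toy software model, with hypothetical names (the patent describes hardware, not code):

    ```python
    # Toy state model of the electromagnetic end effector: energizing
    # the coil grips a nearby ferromagnetic patch; de-energizing
    # releases it. Purely illustrative.
    class EndEffector:
        def __init__(self):
            self.energized = False
            self.held = None

        def set_current(self, on, nearby_object=None):
            self.energized = on
            if on and nearby_object is not None:
                self.held = nearby_object   # object drawn to the effector
            elif not on:
                self.held = None            # object released

    eff = EndEffector()
    eff.set_current(True, nearby_object="payload")
    grip = eff.held                         # gripped while energized
    eff.set_current(False)                  # cutting current releases it
    ```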

  9. Electromagnetic fasteners

    DOEpatents

    Crane, Randolph W.; Marts, Donna J.

    1994-01-01

    An electromagnetic fastener for manipulating objects in space uses the magnetic attraction of various metals. An end effector is attached to a robotic manipulating system having an electromagnet such that when current is supplied to the electromagnet, the object is drawn and affixed to the end effector, and when the current is withheld, the object is released. The object to be manipulated includes a multiplicity of ferromagnetic patches at various locations to provide multiple areas for the effector on the manipulator to become affixed to the object. The ferromagnetic patches are sized relative to the object's geometry and mass.

  10. Virtual modeling of robot-assisted manipulations in abdominal surgery.

    PubMed

    Berelavichus, Stanislav V; Karmazanovsky, Grigory G; Shirokov, Vadim S; Kubyshkin, Valeriy A; Kriger, Andrey G; Kondratyev, Evgeny V; Zakharova, Olga P

    2012-06-27

    To determine the effectiveness of using multidetector computed tomography (MDCT) data in preoperative planning of robot-assisted surgery. Fourteen patients indicated for surgery underwent MDCT using 64 and 256-slice MDCT. Before the examination, a specially constructed navigation net was placed on the patient's anterior abdominal wall. Processing of MDCT data was performed on a Brilliance Workspace 4 (Philips). Virtual vectors that imitate robotic and assistant ports were placed on the anterior abdominal wall of the 3D model of the patient, considering the individual anatomy of the patient and the technical capabilities of robotic arms. Sites for location of the ports were directed by projection on the roentgen-positive tags of the navigation net. There were no complications observed during surgery or in the post-operative period. We were able to reduce robotic arm interference during surgery. The surgical area was optimal for robotic and assistant manipulators without any need for reinstallation of the trocars. This method allows modeling of the main steps in robot-assisted intervention, optimizing operation of the manipulator and lowering the risk of injuries to internal organs.

  11. Avatar Impostors

    NASA Astrophysics Data System (ADS)

    Chicas, K.

    2011-12-01

    In a past two-part study, participants were first ostracized in a virtual ball-tossing game by individuals who either revealed or replaced their identity using virtual avatars. In the second part of the study, participants had to allocate a sample of Tapatio hot sauce to one of the individuals who had just ostracized them. Participants could allocate as much or as little hot sauce as they wanted, and this amount was weighed and recorded as a measure of aggression. On average, individuals who were ostracized by identity-replaced others allocated more than twice as much hot sauce to the ostracizer as participants who were ostracized by those who revealed their identity. In our follow-up study, we were interested in expanding our knowledge of the effects and perceptions of identity manipulation. Specifically, what attributions do others make about motives behind identity replacement? The reactions and responses from those who participated helped show how others perceive people who manipulate their identity. The key motivations that we collect will be measured in future studies; this will allow us to better understand the mechanisms that lead to unique perceptions of identity-manipulated others.

  12. 'Putting it on the table': direct-manipulative interaction and multi-user display technologies for semi-immersive environments and augmented reality applications.

    PubMed

    Encarnação, L Miguel; Bimber, Oliver

    2002-01-01

Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches promise to provide valuable means for the involved interactive data analysis, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: intuitive interaction with multi-dimensional data in immersive and semi-immersive environments, as well as stereoscopic multi-user displays combining the advantages of Virtual and Augmented Reality technology.

  13. A virtual experimenter to increase standardization for the investigation of placebo effects.

    PubMed

    Horing, Bjoern; Newsome, Nathan D; Enck, Paul; Babu, Sabarish V; Muth, Eric R

    2016-07-18

    Placebo effects are mediated by expectancy, which is highly influenced by psychosocial factors of a treatment context. These factors are difficult to standardize. Furthermore, dedicated placebo research often necessitates single-blind deceptive designs where biases are easily introduced. We propose a study protocol employing a virtual experimenter - a computer program designed to deliver treatment and instructions - for the purpose of standardization and reduction of biases when investigating placebo effects. To evaluate the virtual experimenter's efficacy in inducing placebo effects via expectancy manipulation, we suggest a partially blinded, deceptive design with a baseline/retest pain protocol (hand immersions in hot water bath). Between immersions, participants will receive an (actually inert) medication. Instructions pertaining to the medication will be delivered by one of three metaphors: The virtual experimenter, a human experimenter, and an audio/text presentation (predictor "Metaphor"). The second predictor includes falsely informing participants that the medication is an effective pain killer, or correctly informing them that it is, in fact, inert (predictor "Instruction"). Analysis will be performed with hierarchical linear modelling, with a sample size of N = 50. Results from two pilot studies are presented that indicate the viability of the pain protocol (N = 33), and of the virtual experimenter software and placebo manipulation (N = 48). It will be challenging to establish full comparability between all metaphors used for instruction delivery, and to account for participant differences in acceptance of their virtual interaction partner. Once established, the presence of placebo effects would suggest that the virtual experimenter exhibits sufficient cues to be perceived as a social agent. 
He could consequently provide a convenient platform to investigate effects of experimenter behavior, or other experimenter characteristics, e.g., sex, age, race/ethnicity or professional status. More general applications are possible, for example in psychological research such as bias research, or virtual reality research. Potential applications also exist for standardizing clinical research by documenting and communicating instructions used in clinical trials.

  14. Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality

    PubMed Central

    Normand, Jean-Marie; Giannopoulos, Elias; Spanlang, Bernhard; Slater, Mel

    2011-01-01

Background Body change illusions have been of great interest in recent years for the understanding of how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first person perspective position (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. Methodology Twenty-two participants entered into a virtual reality (VR) delivered through a stereo head-tracked wide field-of-view head-mounted display. They saw from a first person perspective a virtual body substituting their own that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart that they saw in the VR. There was a synchronous condition where their prodding movements were synchronous with what they felt and saw and an asynchronous condition where this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also a comparison of before and after self-estimates of belly size produced by direct visual manipulation of the virtual body seen from the first person perspective. Conclusions The results show that first person perspective of a virtual body that substitutes for the own body in virtual reality, together with synchronous multisensory stimulation, can temporarily produce changes in body representation towards the larger belly size. 
This was demonstrated by (a) questionnaire results, (b) the difference between the self-estimated belly size, judged from a first person perspective, after and before the experimental manipulation, and (c) significant positive correlations between these two measures. We discuss this result in the general context of body ownership illusions, and suggest applications including treatment for body size distortion illnesses. PMID:21283823

  15. Inspiration, simulation and design for smart robot manipulators from the sucker actuation mechanism of cephalopods.

    PubMed

    Grasso, Frank W; Setlur, Pradeep

    2007-12-01

    Octopus arms house 200-300 independently controlled suckers that can alternately afford an octopus fine manipulation of small objects and produce high adhesion forces on virtually any non-porous surface. Octopuses use their suckers to grasp, rotate and reposition soft objects (e.g., octopus eggs) without damaging them and to provide strong, reversible adhesion forces to anchor the octopus to hard substrates (e.g., rock) during wave surge. The biological 'design' of the sucker system is understood to be divided anatomically into three functional groups: the infundibulum that produces a surface seal that conforms to arbitrary surface geometry; the acetabulum that generates negative pressures for adhesion; and the extrinsic muscles that allow adhered surfaces to be rotated relative to the arm. The effector underlying these abilities is the muscular hydrostat. Guided by sensory input, the thousands of muscle fibers within the muscular hydrostats of the sucker act in coordination to provide stiffness or force when and where needed. The mechanical malleability of octopus suckers, the interdigitated arrangement of their muscle fibers and the flexible interconnections of its parts make direct studies of their control challenging. We developed a dynamic simulator (ABSAMS) that models the general functioning of muscular hydrostat systems built from assemblies of biologically constrained muscular hydrostat models. We report here on simulation studies of octopus-inspired and artificial suckers implemented in this system. These simulations reproduce aspects of octopus sucker performance and squid tentacle extension. Simulations run with these models using parameters from man-made actuators and materials can serve as tools for designing soft robotic implementations of man-made artificial suckers and soft manipulators.

  16. Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.

    PubMed

    Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela

    2017-11-01

    The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. 
Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions by dynamically suppressing the visibility of manipulable object pictures with Mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception. Copyright © 2017 the authors.

  17. Distributed interactive virtual environments for collaborative experiential learning and training independent of distance over Internet2.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P

    2004-01-01

Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high-performance computing and next-generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial-intelligence rules-based virtual simulations. The virtual reality patient is programmed to change dynamically over time and respond to the manipulations of the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multi-casting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and management of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. 
This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer, improved future performance and should entail training participants to competence in using these tools.

  18. ERPs Differentially Reflect Automatic and Deliberate Processing of the Functional Manipulability of Objects

    PubMed Central

    Madan, Christopher R.; Chen, Yvonne Y.; Singhal, Anthony

    2016-01-01

    It is known that the functional properties of an object can interact with perceptual, cognitive, and motor processes. Previously we have found that a between-subjects manipulation of judgment instructions resulted in different manipulability-related memory biases in an incidental memory test. To better understand this effect we recorded electroencephalography (EEG) while participants made judgments about images of objects that were either high or low in functional manipulability (e.g., hammer vs. ladder). Using a between-subjects design, participants judged whether they had seen the object recently (Personal Experience), or could manipulate the object using their hand (Functionality). We focused on the P300 and slow-wave event-related potentials (ERPs) as reflections of attentional allocation. In both groups, we observed higher P300 and slow wave amplitudes for high-manipulability objects at electrodes Pz and C3. As P300 is thought to reflect bottom-up attentional processes, this may suggest that the processing of high-manipulability objects recruited more attentional resources. Additionally, the P300 effect was greater in the Functionality group. A more complex pattern was observed at electrode C3 during slow wave: processing the high-manipulability objects in the Functionality instruction evoked a more positive slow wave than in the other three conditions, likely related to motor simulation processes. These data provide neural evidence that effects of manipulability on stimulus processing are further mediated by automatic vs. deliberate motor-related processing. PMID:27536224

  19. Using mixed methods to evaluate efficacy and user expectations of a virtual reality-based training system for upper-limb recovery in patients after stroke: a study protocol for a randomised controlled trial.

    PubMed

    Schuster-Amft, Corina; Eng, Kynan; Lehmann, Isabelle; Schmid, Ludwig; Kobashi, Nagisa; Thaler, Irène; Verra, Martin L; Henneke, Andrea; Signer, Sandra; McCaskey, Michael; Kiper, Daniel

    2014-09-06

In recent years, virtual reality has been introduced to neurorehabilitation, in particular with the intention of improving upper-limb training options and facilitating motor function recovery. The proposed study incorporates a quantitative part and a qualitative part, termed a mixed-methods approach: (1) a quantitative investigation of the efficacy of virtual reality training compared with conventional therapy for upper-limb motor function, (2a) a qualitative investigation of patients' experiences and expectations of virtual reality training, and (2b) a qualitative investigation of therapists' experiences using the virtual reality training system in the therapy setting. At three participating clinics, 60 patients at least 6 months after stroke onset will be randomly allocated to an experimental virtual reality group (EG) or to a control group that will receive conventional physiotherapy or occupational therapy (16 sessions, 45 minutes each, over the course of 4 weeks). Using custom data gloves, patients' finger and arm movements will be displayed in real time on a monitor, and they will move and manipulate objects in various virtual environments. A blinded assessor will test patients' motor and cognitive performance twice before, once during, and twice after the 4-week intervention. The primary outcome measure is the Box and Block Test. Secondary outcome measures are the Chedoke-McMaster Stroke Assessments (hand, arm and shoulder pain subscales), the Chedoke-McMaster Arm and Hand Activity Inventory, the Line Bisection Test, the Stroke Impact Scale, the Mini-Mental State Examination and the Extended Barthel Index. Semistructured face-to-face interviews will be conducted with patients in the EG after intervention finalization with a focus on the patients' expectations and experiences regarding the virtual reality training. 
Therapists' perspectives on virtual reality training will be reviewed in three focus groups comprising four to six occupational therapists and physiotherapists. The interviews will help to gain a deeper understanding of the phenomena under investigation and to provide sound recommendations for the implementation of the virtual reality training system for routine use in neurorehabilitation, complementing the quantitative clinical assessments. ClinicalTrials.gov Identifier: NCT01774669 (15 January 2013).

  20. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

An object-oriented event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
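The primitive-object model described above (objects encapsulating properties, methods, and events, plus a container object for grouping) can be sketched in Python. All class, event, and property names below are illustrative assumptions, not taken from the actual VL implementation.

```python
# Illustrative sketch of an event-driven object model like the one
# described above; names are hypothetical, not from the actual VL code.

class VLObject:
    """Primitive object: encapsulates properties, methods, and events."""
    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)
        self._handlers = {}                 # event name -> list of callbacks

    def on(self, event, handler):
        """Register an event handler (e.g. 'picked', 'moved')."""
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        """Dispatch an event to all registered handlers."""
        for handler in self._handlers.get(event, []):
            handler(self, *args)

class Container(VLObject):
    """Groups several objects; events propagate to all children."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add(self, obj):
        self.children.append(obj)

    def fire(self, event, *args):
        super().fire(event, *args)
        for child in self.children:
            child.fire(event, *args)

# Usage: a wand 'picked' event updates a finite element's highlight property.
beam = VLObject("beam_element", highlighted=False)
beam.on("picked", lambda obj: obj.properties.update(highlighted=True))
lab = Container("virtual_lab")
lab.add(beam)
lab.fire("picked")
print(beam.properties["highlighted"])   # → True
```

The design choice illustrated here is that behavior attaches to objects via handlers rather than a central loop, which is what lets heterogeneous objects (interface objects, geometric entities, finite elements) coexist in one container hierarchy.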

  1. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio ConvolvotronsTM by Crystal River Engineering and coupled to the listener with a Polhemus IsotrakTM, tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  2. Anatomical education and surgical simulation based on the Chinese Visible Human: a three-dimensional virtual model of the larynx region.

    PubMed

    Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang

    2013-09-01

    Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, a digital laryngeal anatomy instruction was constructed using HTML and JavaScript languages. The volume larynx models can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.

  3. A Robust Control of Two-Wheeled Mobile Manipulator with Underactuated Joint by Nonlinear Backstepping Method

    NASA Astrophysics Data System (ADS)

    Acar, Cihan; Murakami, Toshiyuki

    In this paper, a robust control of two-wheeled mobile manipulator with underactuated joint is considered. Two-wheeled mobile manipulators are dynamically balanced two-wheeled driven systems that do not have any caster or extra wheels to stabilize their body. Two-wheeled mobile manipulators mainly have an important feature that makes them more flexible and agile than the statically stable mobile manipulators. However, two-wheeled mobile manipulator is an underactuated system due to its two-wheeled structure. Therefore, it is required to stabilize the underactuated passive body and, at the same time, control the position of the center of gravity (CoG) of the manipulator in this system. To realize this, nonlinear backstepping based control method with virtual double inverted pendulum model is proposed in this paper. Backstepping is used with sliding mode to increase the robustness of the system against modeling errors and other perturbations. Then robust acceleration control is also achieved by utilizing disturbance observer. Performance of the proposed method is evaluated by several experiments.
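The robustness role of the sliding-mode term mentioned above can be illustrated with a minimal numerical sketch: a single inverted pendulum regulated by an equivalent-control term plus a smoothed switching term. The model, gains, and boundary layer are illustrative assumptions; this is not the paper's two-wheeled mobile manipulator controller.

```python
import math

# Minimal sliding-mode stabilization of a single inverted pendulum,
# illustrating the robustness idea above; NOT the paper's controller.
# Dynamics: theta_ddot = (g/l)*sin(theta) + u; gains chosen ad hoc.

g, l = 9.81, 1.0
lam, k = 5.0, 30.0             # sliding-surface slope and switching gain
dt, steps = 1e-3, 5000

theta, omega = 0.4, 0.0        # initial tilt (rad) and angular velocity
for _ in range(steps):
    s = omega + lam * theta    # sliding surface s = theta_dot + lam*theta
    # equivalent control cancels gravity; tanh smooths the switching term
    u = -(g / l) * math.sin(theta) - lam * omega - k * math.tanh(s / 0.05)
    omega += ((g / l) * math.sin(theta) + u) * dt   # Euler integration
    theta += omega * dt

print(abs(theta) < 1e-2)   # → True (pendulum regulated near upright)
```

On the surface s = 0 the closed loop reduces to theta_dot = -lam*theta, so theta decays exponentially once the switching term drives s to zero; the tanh boundary layer stands in for the discontinuous sign function to avoid chattering.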

  4. ``Staying in Focus'' - An Online Optics Tutorial on the Eye

    NASA Astrophysics Data System (ADS)

    Hoeling, Barbara M.

    2011-02-01

    The human eye and its vision problems are often used as an entry subject and attention grabber in the teaching of geometrical optics. While this is a real-life application students can relate to, it is difficult to visualize how the eye forms images by studying the still pictures and drawings in a textbook. How to draw a principal ray diagram or how to calculate the image distance from a given object distance and focal length might be clear to most students after studying the book, but even then they often lack an understanding of the "big picture." Where is the image of a very far away object located? How come we can see both far away and close-by objects focused (although not simultaneously)? Computer animations,2 popular with our computer-game savvy students, provide considerably more information than the still images, especially if they allow the user to manipulate parameters and to observe the outcome of a "virtual" experiment. However, as stand-alone learning tools, they often don't provide the students with the necessary physics background or instruction on how to use them.
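The image-distance calculation the passage refers to follows the thin-lens equation 1/f = 1/d_o + 1/d_i. A minimal sketch of that calculation (the function name and the eye's focal length used in the example are illustrative):

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for image distance d_i.
# All distances in the same units (metres here); a simple sketch of the
# calculation students are asked to perform.

def image_distance(f, d_o):
    """Return image distance d_i for focal length f and object distance d_o."""
    if d_o == f:
        return float("inf")     # object at the focal point: rays emerge parallel
    return 1.0 / (1.0 / f - 1.0 / d_o)

# A relaxed eye has f ≈ 17 mm (0.017 m); a very distant object images
# essentially at the focal plane, i.e. on the retina:
print(round(image_distance(0.017, 1e6), 3))   # → 0.017
```

This directly answers the "big picture" question in the passage: as d_o grows, 1/d_o vanishes and d_i approaches f, which is why distant objects focus at the retina of a relaxed eye.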

  5. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. Interior simulator is developed as an example of an AR application of the proposed method. Using interior simulator, users can visually simulate the placement of virtual furniture and articles in the living room, so that they can easily design the living room interior without placing real furniture and articles, by viewing from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame of the 3D object space. Then each projective view of the virtual object in the base images is registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera, tracking non-metric measured feature points for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of the scene of a living room nearly at video rate (20 frames per second).
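As a generic illustration of the projective registration idea (not the authors' exact method), a planar homography between two views can be estimated from four point correspondences with the direct linear transform (DLT) and then used to map a virtual object's anchor point from one view into the other:

```python
import numpy as np

# Generic sketch of planar projective registration: DLT homography from
# four point correspondences, used to carry a virtual object's anchor
# point across views. Illustrative only, not the paper's exact method.

def homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def project(H, pt):
    """Apply H to a 2D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Four tracked feature correspondences between two views of a plane:
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (30, 12), (28, 32), (8, 30)]
H = homography(src, dst)

# Mapping a tracked corner reproduces its observed position in view 2:
u, v = project(H, (1, 1))
print(round(u, 3), round(v, 3))   # → 28.0 32.0
```

The homogeneous divide in `project` is what makes the mapping projective rather than affine, which is why four correspondences (eight constraints) are needed for the eight degrees of freedom of H.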

  6. Virtual Specimens

    NASA Astrophysics Data System (ADS)

    de Paor, D. G.

    2009-12-01

Virtual Field Trips have been around almost as long as the Worldwide Web itself, yet virtual explorers do not generally return to their desktops with folders full of virtual hand specimens. Collection of real specimens on field trips for later analysis in the lab (or at least in the pub) has been an important part of classical field geoscience education and research for generations, but concern for the landscape and for preservation of key outcrops from wanton destruction has led to many restrictions. One of the author's favorite outcrops was recently vandalized, presumably by a geologist who felt the need to bash some of the world's most spectacular buckle folds with a rock sledge. It is not surprising, therefore, that geologists sometimes leave fragile localities out of field trip itineraries. Once analyzed, most specimens repose in drawers or bins, never to be seen again. Some end up in teaching collections, but recent pedagogical research shows that undergraduate students have difficulty relating specimens both to their collection location and ultimate provenance in the lithosphere. Virtual specimens can be created using 3D modeling software and imported into virtual globes such as Google Earth (GE), where they may be linked to virtual field trip stops or restored to their source localities on the paleo-globe. Sensitive localities may be protected by placemark approximation. The GE application program interface (API) has a distinct advantage over the stand-alone GE application when it comes to viewing and manipulating virtual specimens. When instances of the virtual globe are embedded in web pages using the GE plug-in, Collada models of specimens can be manipulated with javascript controls residing in the enclosing HTML, permitting specimens to be magnified, rotated in 3D, and sliced. Associated analytical data may be linked into javascript, and localities for comparison at various points on the globe may be referenced by 'fetching' KML. 
Virtual specimens open up new possibilities for distance learning, where design of effective lab exercises has long been an issue, and they permit independent evaluation of published field research by reviewers who do not have access to the physical field area. Although their creation can be labor intensive, the benefits of virtual specimens for education and research are potentially great. (Figure: interactive 3D specimen of Sierra Granodiorite at outcrop location)

  7. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  8. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.
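The visual-servoing loop described in these two records (extract a feature location from image data, then adjust the control parameters until the object reaches a target position) can be sketched as simple proportional feedback. The centroid feature extractor and the simulated stage below are hypothetical stand-ins, not the patent's implementation.

```python
# Proportional visual-servoing sketch: extract the object's centroid from
# an image, then command stage moves until the object reaches the target.
# The 'centroid' extractor and the simulated stage are illustrative only.

def centroid(image):
    """Feature extraction: centroid of nonzero pixels as (row, col)."""
    pts = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def servo(position, target, gain=0.5, tol=0.05, max_iter=100):
    """Move a simulated stage toward target with proportional control."""
    for _ in range(max_iter):
        err = (target[0] - position[0], target[1] - position[1])
        if max(abs(err[0]), abs(err[1])) < tol:
            break                              # feature reached the target
        position = (position[0] + gain * err[0],
                    position[1] + gain * err[1])
    return position

image = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]      # object in the lower right
obj = centroid(image)                          # measured at (1.5, 1.5)
final = servo(obj, target=(0.0, 0.0))
print(round(final[0], 2), round(final[1], 2))  # → 0.05 0.05
```

In a real microelectrophoresis setup, the `position` update would command the x, y stage or the applied field rather than a simulated point, but the measure-compare-move loop is the same.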

  9. AR Feels "Softer" than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality.

    PubMed

    Gaffary, Yoren; Le Gouis, Benoit; Marchal, Maud; Argelaguet, Ferran; Arnaldi, Bruno; Lecuyer, Anatole

    2017-11-01

    Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We have designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device, enabling the user to press a virtual piston and to compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects all located inside a cardboard box) or in Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We have conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions. The virtual piston is on average perceived as stiffer in the VR condition than in the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants would select the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect, as if objects in AR felt "softer" than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR, and pave the way to future studies aiming at characterizing potential perceptual biases.
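The 60%-at-equal-stiffness result above is the kind of data from which a point of subjective equality (PSE) is estimated in psychophysics. A rough sketch, with invented numbers rather than the paper's data: linearly interpolate the choice proportions to locate the stiffness difference at which the two conditions feel equal.

```python
def pse(diffs, p_vr_stiffer):
    """Point of subjective equality by linear interpolation: the
    stiffness difference at which 'VR stiffer' is chosen 50% of the
    time. Input values below are illustrative, not the study's data."""
    for i in range(len(diffs) - 1):
        p0, p1 = p_vr_stiffer[i], p_vr_stiffer[i + 1]
        if min(p0, p1) <= 0.5 <= max(p0, p1):
            x0, x1 = diffs[i], diffs[i + 1]
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None

# At equal stiffness (diff = 0), VR is judged stiffer 60% of the time,
# so the crossover sits at a negative difference: the VR piston must be
# made softer before the two conditions feel equal.
point = pse([-100, -50, 0, 50, 100], [0.20, 0.40, 0.60, 0.80, 0.95])
```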

  10. Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality.

    PubMed

    Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M

    2017-07-01

    The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles are integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that with the volumetric model, participants succeeded more often, placed their trailing foot before crossing and their leading foot after crossing more consistently, and placed their leading foot closer to the obstacle after crossing than with no model. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.

  11. Exploration of factors that affect the comparative effectiveness of physical and virtual manipulatives in an undergraduate laboratory

    NASA Astrophysics Data System (ADS)

    Chini, Jacquelyn J.; Madsen, Adrian; Gire, Elizabeth; Rebello, N. Sanjay; Puntambekar, Sadhana

    2012-06-01

    Recent research results have failed to support the conventionally held belief that students learn physics best from hands-on experiences with physical equipment. Rather, studies have found that students who perform similar experiments with computer simulations perform as well or better on measures of conceptual understanding than their peers who used physical equipment. In this study, we explored how university-level nonscience majors’ understanding of the physics concepts related to pulleys was supported by experimentation with real pulleys and a computer simulation of pulleys. We report that when students use one type of manipulative (physical or virtual), the comparison is influenced both by the concept studied and the timing of the post-test. Students performed similarly on questions related to force and mechanical advantage regardless of the type of equipment used. On the other hand, students who used the computer simulation performed better on questions related to work immediately after completing the activities; however, the two groups performed similarly on the work questions on a test given one week later. Additionally, both sequences of experimentation (physical-virtual and virtual-physical) equally supported students’ understanding of all of the concepts. These results suggest that both the concept learned and the stability of learning gains should continue to be explored to improve educators’ ability to select the best learning experience for a given topic.

  12. Tools for Science Inquiry Learning: Tool Affordances, Experimentation Strategies, and Conceptual Understanding

    NASA Astrophysics Data System (ADS)

    Bumbacher, Engin; Salehi, Shima; Wieman, Carl; Blikstein, Paulo

    2017-12-01

    Manipulative environments play a fundamental role in inquiry-based science learning, yet how they impact learning is not fully understood. In a series of two studies, we develop the argument that manipulative environments (MEs) influence the kind of inquiry behaviors students engage in, and that this influence is realized through the affordances of MEs, independent of whether they are physical or virtual. In particular, we examine how MEs shape college students' experimentation strategies and conceptual understanding. In study 1, students engaged in two consecutive inquiry tasks, first on mass and spring systems and then on electric circuits. They used either virtual or physical MEs. We found that the use of experimentation strategies was strongly related to conceptual understanding across tasks, but that students engaged differently in those strategies depending on what ME they used. More students engaged in productive strategies using the virtual ME for electric circuits, and vice versa using the physical ME for mass and spring systems. In study 2, we isolated the affordance of measurement uncertainty by comparing two versions of the same virtual ME for electric circuits—one with and one without noise—and found that the conditions differed in terms of productive experimentation strategies. These findings indicate that measures of inquiry processes may resolve apparent ambiguities and inconsistencies between studies on MEs that are based on learning outcomes alone.

  13. Physiological reactivity during object manipulation among cigarette-exposed infants at 9 months of age.

    PubMed

    Schuetze, Pamela; Lessard, Jared; Colder, Craig R; Maiorana, Nicole; Shisler, Shannon; Eiden, Rina D; Huestis, Marilyn A; Henrie, James

    2015-01-01

    The purpose of this study was to examine the association between prenatal exposure to cigarettes and heart rate during an object manipulation task at 9 months of age. Second-by-second heart rate was recorded for 181 infants who were prenatally exposed to cigarettes and 77 nonexposed infants during the manipulation of four standardized toys. A series of longitudinal multilevel models were run to examine the association of prenatal smoking on the intercept and slope of heart rate during four 90-second object manipulation tasks. After controlling for maternal age, prenatal marijuana and alcohol use, duration of focused attention and activity level, results indicated that the heart rates of exposed infants significantly increased during the object manipulation task. These findings suggest casual rather than focused attention and a possible increase in physiological arousal during object manipulation. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Neural coding in barrel cortex during whisker-guided locomotion

    PubMed Central

    Sofroniew, Nicholas James; Vlasov, Yurii A; Hires, Samuel Andrew; Freeman, Jeremy; Svoboda, Karel

    2015-01-01

    Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.12559.001 PMID:26701910

  15. Operator Localization of Virtual Objects

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Menges, Brian M.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Errors in the localization of nearby virtual objects presented via see-through, helmet mounted displays are examined as a function of viewing conditions and scene content. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. The apparent physical size and transparency of the virtual objects and physical surfaces respectively are also influenced by their relative position when superimposed. Design implications are discussed.

  16. Parametric Method to Define Area of Allowable Configurations while Changing Position of Restricted Zones

    NASA Astrophysics Data System (ADS)

    Pritykin, F. N.; Nefedov, D. I.; Rogoza, Yu A.; Zinchenko, Yu V.

    2018-03-01

    The article presents the findings related to the development of the module for automatic collision detection of the manipulator with restricted zones for virtual motion modeling. It proposes the parametric method for specifying the area of allowable joint configurations. The authors study the cases when restricted zones are specified using the horizontal plane or front-projection planes. The joint coordinate space is specified by rectangular axes in the direction of which the angles defining the displacements in turning pairs are laid off. The authors present the results of modeling, which enabled the development of a parametric method for specifying a set of cross-sections defining the shape and position of allowable configurations in different positions of a restricted zone. All joint points that define allowable configurations refer to the indicated sections. The area of allowable configurations is specified analytically by using several kinematic surfaces that limit it. A geometric analysis is developed based on the use of the area of allowable configurations characterizing the position of the manipulator and reported restricted zones. The paper presents numerical calculations related to virtual simulation of the manipulator path performed by the mobile robot Varan when using the developed algorithm and restricted zones. The obtained analytical dependencies allow us to define the area of allowable configurations, which serves as a knowledge base for intelligent control of the manipulator path in a predefined environment. The use of the obtained region to synthesize a joint trajectory makes it possible to correct the manipulator path and to foresee and eliminate deadlocks when synthesizing motions along the velocity vector.
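The allowable-configuration idea above can be illustrated, under strong simplifying assumptions, by a direct check: compute the joint-point positions by forward kinematics and test them against a horizontal restricted plane. The planar serial arm, link lengths, and zone below are hypothetical stand-ins, not the Varan manipulator's geometry or the article's parametric cross-section method.

```python
import numpy as np

def joint_points(thetas, lengths):
    """Joint-point positions of a planar serial manipulator,
    via cumulative-angle forward kinematics from the origin."""
    pts = [np.zeros(2)]
    angle = 0.0
    for th, length in zip(thetas, lengths):
        angle += th
        pts.append(pts[-1] + length * np.array([np.cos(angle), np.sin(angle)]))
    return np.array(pts)

def is_allowable(thetas, lengths, zone_y=-0.5):
    """A configuration is allowable if every joint point stays above
    the horizontal restricted plane y = zone_y (illustrative zone)."""
    return bool(np.all(joint_points(thetas, lengths)[:, 1] > zone_y))

links = [1.0, 1.0, 0.5]
ok = is_allowable([0.3, 0.2, -0.1], links)    # arm stays above the plane
bad = is_allowable([-1.2, -0.5, 0.0], links)  # first link dips into the zone
```

Sweeping such a check over a grid of joint angles yields a discrete approximation of the allowable-configuration region that the article characterizes analytically.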

  17. Impact of virtual reality simulation on learning barriers of phacoemulsification perceived by residents

    PubMed Central

    Ng, Danny Siu-Chun; Sun, Zihan; Young, Alvin Lerrmann; Ko, Simon Tak-Chuen; Lok, Jerry Ka-Hing; Lai, Timothy Yuk-Yau; Sikder, Shameema; Tham, Clement C

    2018-01-01

    Objective To identify residents’ perceived barriers to learning phacoemulsification surgical procedures and to evaluate whether virtual reality simulation training changed these perceptions. Design The ophthalmology residents undertook a simulation phacoemulsification course and proficiency assessment on the Eyesi system using the previously validated training modules of intracapsular navigation, anti-tremor, capsulorrhexis, and cracking/chopping. A cross-sectional, multicenter survey on the perceived difficulties in performing phacoemulsification tasks on patients, based on the validated International Council of Ophthalmology’s Ophthalmology Surgical Competency Assessment Rubric (ICO-OSCAR), using a 5-point Likert scale (1 = least and 5 = most difficulty), was conducted among residents with or without prior simulation training. Mann–Whitney U tests were carried out to compare the mean scores, and multivariate regression analyses were performed to evaluate the association of lower scores with the following potential predictors: 1) higher level trainee, 2) can complete phacoemulsification most of the time (>90%) without supervisor’s intervention, and 3) prior simulation training. Setting The study was conducted in ophthalmology residency training programs in five regional hospitals in Hong Kong. Results Of the 22 residents, 19 responded (86.3%), of which 13 (68.4%) had completed simulation training. Nucleus cracking/chopping was ranked highest in difficulty by all respondents followed by capsulorrhexis completion and nucleus rotation/manipulation. Respondents with prior simulation training had significantly lower difficulty scores on these three tasks (nucleus cracking/chopping 3.85 vs 4.75, P = 0.03; capsulorrhexis completion 3.31 vs 4.40, P = 0.02; and nucleus rotation/manipulation 3.00 vs 4.75, P = 0.01). In multivariate analyses, simulation training was significantly associated with lower difficulty scores on these three tasks. 
Conclusion Residents who had completed Eyesi simulation training had higher confidence in performing the most difficult tasks perceived during phacoemulsification. PMID:29785084

  18. Natural gesture interfaces

    NASA Astrophysics Data System (ADS)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of a gesture-based system for interacting with virtual objects. It discusses the common problems of such interaction and the specific requirements that virtual and augmented reality place on gesture interfaces.

  19. Development of a virtual reality training system for endoscope-assisted submandibular gland removal.

    PubMed

    Miki, Takehiro; Iwai, Toshinori; Kotani, Kazunori; Dang, Jianwu; Sawada, Hideyuki; Miyake, Minoru

    2016-11-01

    Endoscope-assisted surgery has widely been adopted as a basic surgical procedure, with various training systems using virtual reality developed for this procedure. In the present study, a basic training system comprising virtual reality for the removal of submandibular glands under endoscope assistance was developed. The efficacy of the training system was verified in novice oral surgeons. A virtual reality training system was developed using existing haptic devices. Virtual reality models were constructed from computed tomography data to ensure anatomical accuracy. Novice oral surgeons were trained using the developed virtual reality training system. The developed virtual reality training system included models of the submandibular gland and surrounding connective tissues and blood vessels entering the submandibular gland. Cutting or abrasion of the connective tissue and manipulations, such as elevation of blood vessels, were reproduced by the virtual reality system. A training program using the developed system was devised. Novice oral surgeons were trained in accordance with the devised training program. Our virtual reality training system for endoscope-assisted removal of the submandibular gland is effective in the training of novice oral surgeons in endoscope-assisted surgery. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. Virtual Workshop Environment (VWE): A Taxonomy and Service Oriented Architecture (SOA) Framework for Modularized Virtual Learning Environments (VLE)--Applying the Learning Object Concept to the VLE

    ERIC Educational Resources Information Center

    Paulsson, Fredrik; Naeve, Ambjorn

    2006-01-01

    Based on existing Learning Object taxonomies, this article suggests an alternative Learning Object taxonomy, combined with a general Service Oriented Architecture (SOA) framework, aiming to transfer the modularized concept of Learning Objects to modularized Virtual Learning Environments. The taxonomy and SOA-framework exposes a need for a clearer…

  1. Taxonomy based analysis of force exchanges during object grasping and manipulation

    PubMed Central

    Martin-Brevet, Sandra; Jarrassé, Nathanaël; Burdet, Etienne

    2017-01-01

    The flexibility of the human hand in object manipulation is essential for daily life activities, but remains relatively little explored with quantitative methods. On the one hand, recent taxonomies describe qualitatively the classes of hand postures for object grasping and manipulation. On the other hand, the quantitative analysis of hand function has been generally restricted to precision grip (with thumb and index opposition) during lifting tasks. The aim of the present study is to fill the gap between these two kinds of descriptions, by investigating quantitatively the forces exerted by the hand on an instrumented object in a set of representative manipulation tasks. The object was a parallelepiped able to measure the force exerted on each of its six faces, as well as its acceleration. The grasping force was estimated from the lateral force and the unloading force from the bottom force. The protocol included eleven tasks with complementary constraints inspired by recent taxonomies: four tasks corresponding to lifting and holding the object with different grasp configurations, and seven to manipulating the object (rotation around each of its axes and translation). The grasping and unloading forces and object rotations were measured during the five phases of the actions: unloading, lifting, holding or manipulation, preparation to deposit, and deposit. The results confirm the tight regulation between grasping and unloading forces during lifting, and extend this to the deposit phase. In addition, they provide a precise description of the regulation of force exchanges during various manipulation tasks spanning representative actions of daily life. The timing of manipulation showed both sequential and overlapping organization of the different sub-actions, and micro-errors could be detected. This phenomenological study confirms the feasibility of using an instrumented object to investigate complex manipulative behavior in humans.
This protocol will be used in the future to investigate upper-limb dexterity in patients with sensory-motor impairments. PMID:28562617
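The grasping-force and unloading-force estimates described above can be sketched for a cuboid instrumented object. The face names, the averaging of the two opposing lateral faces, and the numbers below are illustrative assumptions, not the study's calibration procedure.

```python
def grip_and_load(face_forces, weight):
    """Estimate grip and unloading force from the face-normal forces
    of a cuboid instrumented object (dictionary keys are illustrative).

    face_forces: normal force in newtons on each named face.
    weight: the object's weight in newtons.
    """
    # Grip force: average of the two opposing lateral (digit) faces
    grip = 0.5 * (face_forces["left"] + face_forces["right"])
    # Unloading force: the part of the object's weight no longer carried
    # by the support, i.e. weight minus the measured bottom force
    load = weight - face_forces["bottom"]
    return grip, load

# A 4.0 N object held with a 6 N pinch while half its weight still
# rests on the table (bottom face reads 2.0 N)
grip, load = grip_and_load(
    {"left": 6.0, "right": 6.0, "bottom": 2.0}, weight=4.0)
```

The tight grip/load regulation the study reports would appear here as `grip` rising in proportion to `load` throughout the unloading and lifting phases.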

  2. Does yohimbine hydrochloride facilitate fear extinction in virtual reality treatment of fear of flying? A randomized placebo-controlled trial.

    PubMed

    Meyerbroeker, Katharina; Powers, Mark B; van Stegeren, Anda; Emmelkamp, Paul M G

    2012-01-01

    Research suggests that yohimbine hydrochloride (YOH), a noradrenaline agonist, can facilitate fear extinction. It is thought that the mechanism of enhanced emotional memory is stimulated through elevated noradrenaline levels. This randomized placebo-controlled trial examined the potential exposure-enhancing effects of YOH in a clinical sample of participants meeting DSM-IV criteria for a specific phobia (fear of flying). Sixty-seven participants with fear of flying were randomized to 4 sessions of virtual reality exposure therapy (VRET) combined with YOH (10 mg), or 4 sessions of VRET combined with a placebo. Treatment consisted of 4 weekly 1-hour exposure sessions consisting of two 25-minute virtual flights. At pre- and post-treatment, fear of flying was assessed. The YOH or placebo capsules were administered 1 h prior to exposures. The manipulation of the noradrenaline activity was confirmed by salivary α-amylase (sAA) samples taken before, during and after exposure. Forty-eight participants completed treatment. Manipulation of noradrenaline levels with YOH was successful, with significantly higher levels of sAA in the YOH group when entering exposure. Results showed that both groups improved significantly from pre- to post-treatment with respect to anxiety reduction. However, although the manipulation of noradrenaline activity was successful, there was no evidence that YOH enhanced outcome. Participants improved significantly on anxiety measures independently of drug condition, after 4 sessions of VRET. These data do not support the initial findings of exposure-enhancing effects of YOH in this dosage in clinical populations. Copyright © 2011 S. Karger AG, Basel.

  3. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied in a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation and to provide fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To develop the control task, different kinds of control laws were included in the monitoring and control interface. These control laws are PID, MPC and a non-linear model-based control law. The MCI helps to maintain the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of outlet temperature sensor failure, the MCI will display an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts to be applied in a double-pipe heat exchanger pilot plant. The method aims to teach the students through the observation and manipulation of the main variables of the process and by the interaction with the monitoring and control interface (MCI) developed in LabVIEW©. The MCI provides the electronic engineering students with the knowledge of heat exchanger behavior, since the interface is provided with a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462
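As a minimal sketch of a model-based virtual sensor of the kind described above (and emphatically not the paper's heat-exchanger model), a scalar high-gain observer blends a first-order outlet-temperature model with the measured output; the model term is what keeps producing a usable estimate if the physical sensor degrades. The dynamics `dT/dt = a*(T_in - T)`, the gains, and the temperatures are illustrative assumptions.

```python
def high_gain_observer(y_meas, T_in, a=0.2, L=5.0, dt=0.1, T0=20.0):
    """Scalar high-gain observer for an outlet temperature assumed to
    follow dT/dt = a*(T_in - T); y_meas is the sensor reading stream.
    Returns the sequence of state estimates (explicit Euler steps)."""
    T_hat = T0
    estimates = []
    for y in y_meas:
        # model prediction plus high-gain correction from the measurement
        T_hat += dt * (a * (T_in - T_hat) + L * (y - T_hat))
        estimates.append(T_hat)
    return estimates

# Constant true outlet reading of 50 C with 60 C inlet: the estimate
# converges from its 20 C initial guess toward the sensor reading
est = high_gain_observer([50.0] * 100, T_in=60.0)
```

With the correction gain `L` much larger than the model rate `a`, the estimate tracks the sensor when it is healthy; setting `L = 0` on a detected fault would let the pure model prediction stand in for the failed sensor.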

  4. Fast grasping of unknown objects using principal component analysis

    NASA Astrophysics Data System (ADS)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis is determined by principal component analysis of the single-view partial point cloud. To cope with the grasp uncertainty, robot motion is assisted to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performances based on the results of grasping a series of unknown objects. To minimize the grasping uncertainty, the merits of the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, grasping reliability is greatly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus increasing robot efficiency in unpredictable environments.
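The PCA step described above, aligning grasp candidates along the point cloud's principal axis, can be sketched with a plain covariance eigendecomposition. The elongated toy cloud below is illustrative; a real pipeline would feed in the single-view partial point cloud from the depth camera.

```python
import numpy as np

def principal_axis(points):
    """Principal axis of a point cloud via PCA: the unit eigenvector of
    the covariance matrix with the largest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, np.argmax(eigvals)]

# Toy cloud elongated along x: the principal axis should be ~(±1, 0, 0)
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([5.0, 0.5, 0.5])
axis = principal_axis(pts)
```

Grasp candidates would then be sampled at intervals along `axis`, with the gripper closing perpendicular to it, before the force-balance optimization selects the final zone.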

  5. Learning inverse kinematics: reduced sampling through decomposition into virtual robots.

    PubMed

    de Angulo, Vicente Ruiz; Torras, Carme

    2008-12-01

    We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture, and thus, it is completely general. Parametrized self-organizing maps are particularly well suited for this type of learning, and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.

  6. Maintaining Engagement in Long-term Interventions with Relational Agents

    PubMed Central

    Bickmore, Timothy; Schulman, Daniel; Yin, Langxuan

    2011-01-01

    We discuss issues in designing virtual humans for applications which require long-term voluntary use, and the problem of maintaining engagement with users over time. Concepts and theories related to engagement from a variety of disciplines are reviewed. We describe a platform for conducting studies into long-term interactions between humans and virtual agents, and present the results of two longitudinal randomized controlled experiments in which the effect of manipulations of agent behavior on user engagement was assessed. PMID:21318052

  7. Virtual fixtures as tools to enhance operator performance in telepresence environments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.

    1993-12-01

    This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.

  8. Development and application of virtual reality for man/systems integration

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1991-01-01

    While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still has the problem of presenting an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position, but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the primary way a human physically interacts with their environment is with their hands, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world.
The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.

  9. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality.

    PubMed

    Zenner, Andre; Kruger, Antonio

    2017-04-01

    We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.

  10. Trajectory Tracking of a Planer Parallel Manipulator by Using Computed Force Control Method

    NASA Astrophysics Data System (ADS)

    Bayram, Atilla

    2017-03-01

    Despite their smaller workspace, parallel manipulators have advantages over their serial counterparts in terms of higher speed, acceleration, rigidity, accuracy, manufacturing cost and payload. Accordingly, this type of manipulator can be used in many applications such as high-speed machine tools, tuning machines for feeding, sensitive cutting, assembly and packaging. This paper presents a special type of planar parallel manipulator with three degrees of freedom. It is constructed as a variable geometry truss, generally known as a planar Stewart platform. The reachable and orientation workspaces are obtained for this manipulator. The inverse kinematic analysis is solved for trajectory tracking subject to redundancy and joint limit avoidance. The dynamics model of the manipulator is then established using the virtual work method. Simulations are performed to follow given planar trajectories using the dynamic equations of the variable geometry truss manipulator and the computed force control method. In the computed force control method, the feedback gain matrices for PD control are tuned either as fixed matrices by trial and error or as variable ones by means of optimization with a genetic algorithm.
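The computed force (computed torque) control law with PD feedback described above can be sketched generically. The dynamics terms M, C, G and the gain matrices below are placeholders, not the paper's actual manipulator model:

```python
import numpy as np

def computed_torque(q, qd, q_ref, qd_ref, qdd_ref, M, C, G, Kp, Kd):
    """Computed-torque control law:
        tau = M(q) @ (qdd_ref + Kd @ (qd_ref - qd) + Kp @ (q_ref - q)) + C + G
    Kp and Kd are the PD feedback gain matrices (tuned, e.g., by trial
    and error or by a genetic algorithm, as in the paper)."""
    e = q_ref - q          # position tracking error
    ed = qd_ref - qd       # velocity tracking error
    return M @ (qdd_ref + Kd @ ed + Kp @ e) + C + G
```

With zero tracking error and unit mass matrix, the commanded force reduces to the reference acceleration, which is a quick consistency check on the law.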

  11. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are discussed.
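Adherence to Weber's law means the just-noticeable difference (JND) in size scales proportionally with the size itself, delta_I = k * I. A minimal illustration follows; the Weber fraction used is arbitrary and not taken from the study:

```python
def weber_jnd(size: float, weber_fraction: float = 0.1) -> float:
    """Under Weber's law the just-noticeable difference grows in
    proportion to stimulus magnitude: delta_I = k * I. The 10%
    fraction here is illustrative only."""
    return weber_fraction * size

# A larger object requires a proportionally larger size change
# before the difference becomes perceptible:
jnd_small = weber_jnd(40.0)   # JND for a 40 mm object
jnd_large = weber_jnd(80.0)   # JND for an 80 mm object
```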

  12. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after an automated OSF-based lung segmentation. The experiments showed a significant improvement in mean absolute surface distance error (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of interaction is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required to reach complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result.
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Virtual exertions: evoking the sense of exerting forces in virtual reality using gestures and muscle activity.

    PubMed

    Chen, Karen B; Ponto, Kevin; Tredinnick, Ross D; Radwin, Robert G

    2015-06-01

    This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces. Unlike physical objects, virtual objects are images and lack mass. There is currently no practical physically demanding way to interact with virtual objects to simulate strenuous activities. Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertions, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied. Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions. Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising. Ultimately this approach may offer opportunities for research and training individuals to perform strenuous activities under potentially safer conditions that mimic situations while seeing their own body and hands relative to the scene. © 2014, Human Factors and Ergonomics Society.

  14. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    PubMed Central

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-01-01

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609

  15. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    PubMed

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  16. Virtual reality aided visualization of fluid flow simulations with application in medical education and diagnostics.

    PubMed

    Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad

    2013-12-01

    Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective. © 2013 Published by Elsevier Ltd.
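The shear stress on artery walls computed by the finite element simulation above can be sanity-checked against the analytic Poiseuille result for steady laminar flow in a rigid tube. This closed form is only an approximation for real arteries, which are neither rigid nor steady-flow, and is not the paper's FEM computation:

```python
import math

def poiseuille_wall_shear(mu: float, Q: float, R: float) -> float:
    """Analytic wall shear stress for steady laminar (Poiseuille) flow
    in a rigid circular tube: tau_w = 4 * mu * Q / (pi * R**3),
    where mu is dynamic viscosity, Q volumetric flow rate, R radius."""
    return 4.0 * mu * Q / (math.pi * R ** 3)
```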

  17. Handling knowledge via Concept Maps: a space weather use case

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Fox, Peter

    Concept Maps (Cmaps) are powerful means for encoding knowledge in graphical form. Since flexible software tools exist to manipulate the knowledge embedded in Cmaps in machine-readable form, such entities are suitable candidates not only for representing ontologies and semantics in Virtual Observatory (VO) architectures, but also for knowledge handling and knowledge discovery. In this work, we present a use case relevant to space weather applications and elaborate on its possible implementation and advanced use in Semantic Virtual Observatories dedicated to Sun-Earth Connections. This analysis was carried out in the framework of the Electronic Geophysical Year (eGY) and represents an achievement of the eGY Virtual Observatories Working Group.

  18. Using virtual robot-mediated play activities to assess cognitive skills.

    PubMed

    Encarnação, Pedro; Alvarez, Liliana; Rios, Adriana; Maya, Catarina; Adams, Kim; Cook, Al

    2014-05-01

    To evaluate the feasibility of using virtual robot-mediated play activities to assess cognitive skills. Children with and without disabilities utilized both a physical robot and a matching virtual robot to perform the same play activities. The activities were designed such that successfully performing them is an indication of understanding of the underlying cognitive skills. Participants' performance with both robots was similar when evaluated by the success rates in each of the activities. Session video analysis encompassing participants' behavioral, interaction and communication aspects revealed differences in sustained attention, visuospatial and temporal perception, and self-regulation, favoring the virtual robot. The study shows that virtual robots are a viable alternative to the use of physical robots for assessing children's cognitive skills, with the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support. Virtual robots can provide a vehicle for children to demonstrate cognitive understanding. Virtual and physical robots can be used as augmentative manipulation tools allowing children with disabilities to actively participate in play, educational and therapeutic activities. Virtual robots have the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support.

  19. Energy efficiency analysis of the manipulation process by the industrial objects with the use of Bernoulli gripping devices

    NASA Astrophysics Data System (ADS)

    Savkiv, Volodymyr; Mykhailyshyn, Roman; Duchon, Frantisek; Mikhalishin, Mykhailo

    2017-11-01

    The article addresses the topical issue of reducing energy consumption in the transportation of industrial objects. The energy efficiency of the object manipulation process is studied using an orientation optimization method with different gripping approaches. An analysis is proposed of how the components of the inertial forces acting on the manipulated object influence the required force characteristics and energy consumption of a Bernoulli gripping device. The economic benefit of optimally orienting the Bernoulli gripping device while transporting the manipulated object, compared to transportation without re-orientation, is demonstrated.

  20. DEC Ada interface to Screen Management Guidelines (SMG)

    NASA Technical Reports Server (NTRS)

    Laomanachareon, Somsak; Lekkos, Anthony A.

    1986-01-01

    DEC's Screen Management Guidelines are the Run-Time Library procedures that perform terminal-independent screen management functions on a VT100-class terminal. These procedures assist users in designing, composing, and keeping track of complex images on a video screen. There are three fundamental elements in the screen management model: the pasteboard, the virtual display, and the virtual keyboard. The pasteboard is like a two-dimensional area on which a user places and manipulates screen displays. The virtual display is a rectangular part of the terminal screen to which a program writes data with procedure calls. The virtual keyboard is a logical structure for input operation associated with a physical keyboard. SMG can be called by all major VAX languages. Through Ada, predefined language Pragmas are used to interface with SMG. These features and elements of SMG are briefly discussed.

  1. Virtual Environment User Interfaces to Support RLV and Space Station Simulations in the ANVIL Virtual Reality Lab

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    1998-01-01

    Several virtual reality I/O peripherals were successfully configured and integrated as part of the author's 1997 Summer Faculty Fellowship work. These devices, which were not supported by the developers of VR software packages, use new software drivers and configuration files developed by the author to allow them to be used with simulations developed using those software packages. The successful integration of these devices has added significant capability to the ANVIL lab at MSFC. In addition, the author was able to complete the integration of a networked virtual reality simulation of the Space Shuttle Remote Manipulator System docking Space Station modules which was begun as part of his 1996 Fellowship. The successful integration of this simulation demonstrates the feasibility of using VR technology for ground-based training as well as on-orbit operations.

  2. New trends in the virtualization of hospitals--tools for global e-Health.

    PubMed

    Graschew, Georgi; Roelofs, Theo A; Rakowsky, Stefan; Schlag, Peter M; Heinzlreiter, Paul; Kranzlmüller, Dieter; Volkert, Jens

    2006-01-01

    The development of virtual hospitals and digital medicine helps to bridge the digital divide between different regions of the world and enables equal access to high-level medical care. Pre-operative planning, intra-operative navigation and minimally-invasive surgery require a digital and virtual environment supporting the perception of the physician. As data and computing resources in a virtual hospital are distributed over many sites the concept of the Grid should be integrated with other communication networks and platforms. A promising approach is the implementation of service-oriented architectures for an invisible grid, hiding complexity for both application developers and end-users. Examples of promising medical applications of Grid technology are the real-time 3D-visualization and manipulation of patient data for individualized treatment planning and the creation of distributed intelligent databases of medical images.

  3. A model for flexible tools used in minimally invasive medical virtual environments.

    PubMed

    Soler, Francisco; Luzon, M Victoria; Pop, Serban R; Hughes, Chris J; John, Nigel W; Torres, Juan Carlos

    2011-01-01

    Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application, where both are essential. Efficient algorithms must therefore be developed. The purpose of this project is the development and validation of a novel physics-based real-time tool manipulation model that is easy to integrate into any medical virtual environment requiring support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by an immersive medical virtual environment. Our model accurately incorporates patient-specific data and adapts to the geometrical complexity of the vessel in real time.

  4. Autonomous Object Manipulation Using a Soft Planar Grasping Manipulator

    PubMed Central

    Katzschmann, Robert K.; Marchese, Andrew D.

    2015-01-01

    Abstract This article presents the development of an autonomous motion planning algorithm for a soft planar grasping manipulator capable of grasp-and-place operations by encapsulation with uncertainty in the position and shape of the object. The end effector of the soft manipulator is fabricated in one piece without weakening seams using lost-wax casting instead of the commonly used multilayer lamination process. The soft manipulation system can grasp randomly positioned objects within its reachable envelope and move them to a desired location without human intervention. The autonomous planning system leverages the compliance and continuum bending of the soft grasping manipulator to achieve repeatable grasps in the presence of uncertainty. A suite of experiments is presented that demonstrates the system's capabilities. PMID:27625916

  5. Providing haptic feedback in robot-assisted minimally invasive surgery: a direct optical force-sensing solution for haptic rendering of deformable bodies.

    PubMed

    Ehrampoosh, Shervin; Dave, Mohit; Kia, Michael A; Rablau, Corneliu; Zadeh, Mehrdad H

    2013-01-01

    This paper presents an enhanced haptic-enabled master-slave teleoperation system which can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to measure accurately the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment was carried out to discriminate deformability of objects when human subjects were in direct contact with deformable objects by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment was conducted to study the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that the force feedback increased the ability of subjects to correctly identify materials of different deformable types. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.

  6. Right insular damage decreases heartbeat awareness and alters cardio-visual effects on bodily self-consciousness.

    PubMed

    Ronchi, Roberta; Bello-Ruiz, Javier; Lukowska, Marta; Herbelin, Bruno; Cabrilo, Ivan; Schaller, Karl; Blanke, Olaf

    2015-04-01

    Recent evidence suggests that multisensory integration of bodily signals involving exteroceptive and interoceptive information modulates bodily aspects of self-consciousness such as self-identification and self-location. In the so-called Full Body Illusion subjects watch a virtual body being stroked while they perceive tactile stimulation on their own body inducing illusory self-identification with the virtual body and a change in self-location towards the virtual body. In a related illusion, it has recently been shown that similar changes in self-identification and self-location can be observed when an interoceptive signal is used in association with visual stimulation of the virtual body (i.e., participants observe a virtual body illuminated in synchrony with their heartbeat). Although brain imaging and neuropsychological evidence suggest that the insular cortex is a core region for interoceptive processing (such as cardiac perception and awareness) as well as for self-consciousness, it is currently not known whether the insula mediates cardio-visual modulation of self-consciousness. Here we tested the involvement of insular cortex in heartbeat awareness and cardio-visual manipulation of bodily self-consciousness in a patient before and after resection of a selective right neoplastic insular lesion. Cardio-visual stimulation induced an abnormally enhanced state of bodily self-consciousness; in addition, cardio-visual manipulation was associated with an experienced loss of the spatial unity of the self (illusory bi-location and duplication of his body), not observed in healthy subjects. Heartbeat awareness was found to decrease after insular resection. Based on these data we propose that the insula mediates interoceptive awareness as well as cardio-visual effects on bodily self-consciousness and that insular processing of interoceptive signals is an important mechanism for the experienced unity of the self. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Intra-operative 3D imaging system for robot-assisted fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-01-01

    Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of bone fragments is essential for a good fast healing process. Percutaneous techniques are associated with faster recovery time and lower infection risk. However, deducing intra-operatively the desired reduction position is quite challenging due to the currently available technology. The 2D nature of this technology (i.e. the image intensifier) doesn't provide enough information to the surgeon regarding the fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, to generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included into the system to track the actual pose of the bone fragments in the physical space, and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reductions accordingly. The system is here evaluated through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71 ° (rotational RMSE).
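The reported accuracy figures are translational and rotational root-mean-square errors over the reduction trials. A minimal sketch of the metric itself (the residual values below are illustrative, not the paper's data):

```python
import numpy as np

def rmse(errors) -> float:
    """Root-mean-square error over per-trial residuals, as used to
    report translational (mm) and rotational (deg) reduction accuracy."""
    errors = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(errors ** 2)))

# Example with made-up per-trial translational residuals in mm:
trans_rmse = rmse([0.8, 1.2, 0.9, 1.5])
```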

  8. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    PubMed Central

    Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2016-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486

  9. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few have solved the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, without requiring JNI bridging code.

  10. Pilot study on effectiveness of simulation for surgical robot design using manipulability.

    PubMed

    Kawamura, Kazuya; Seno, Hiroto; Kobayashi, Yo; Fujie, Masakatsu G

    2011-01-01

    Medical technology has advanced with the introduction of robot technology, which facilitates some traditional medical treatments that were previously very difficult. However, at present, surgical robots are used in limited medical domains because these robots are designed using only data obtained from adult patients and are not suitable for targets with different properties, such as children. Therefore, surgical robots are required to perform specific functions for each clinical case. In addition, the robots must exhibit sufficiently high movability and operability for each case. In the present study, we focused on evaluating the mechanism and configuration of a surgical robot through a simulation based on movability and operability during an operation. We previously proposed a simulator system that reproduces the conditions of a robot and a target in a virtual patient body to evaluate the operability of the surgeon during an operation. In the present paper, we describe a simple experiment to verify the condition of the surgical assisting robot during an operation. In this experiment, an operation imitating a suturing motion was carried out in a virtual workspace, and the surgical robot was evaluated based on manipulability as an indicator of movability. As a result, it was confirmed that the left-side manipulator was controlled with low manipulability during suturing. This simulation system can identify low-movability configurations of a robot before an actual robot is developed. Our results show the effectiveness of the proposed simulation system.

  11. Helios: a tangible and augmented environment to learn optical phenomena in astronomy

    NASA Astrophysics Data System (ADS)

    Fleck, Stéphanie; Hachet, Martin

    2015-10-01

    France is among the few countries that have integrated astronomy into the primary school curriculum. However, over the past fifteen years, many studies have shown that children have difficulty understanding elementary astronomical phenomena such as day/night alternation, the seasons, or the phases of the Moon. To understand these phenomena, learners have to mentally construct 3D representations of the motions of celestial bodies and understand how light propagates from an allocentric point of view. Consequently, children in grades 4-5 (8 to 11 years old), whose spatial cognition is still developing, have great difficulty assimilating the geometric optics problems that underlie astronomy. To make astronomy learning more efficient for young pupils, we have designed an Augmented Inquiry-Based Learning Environment (AIBLE): HELIOS. Because direct manipulation in astronomy is intrinsically impossible, we propose to manipulate the underlying model instead. With HELIOS, virtual replicas of the Sun, Moon and Earth are directly controlled through tangible manipulation. This digital support combines the possibilities of Augmented Reality (AR) with intuitive interactions that follow the principles of science didactics. Light properties are taken into account, and the shadows of the Earth and Moon are produced directly by an omnidirectional light source associated with the virtual Sun. This AR environment provides users with experiences they would otherwise not be able to have in the physical world. Our main goal is that students take active control of their learning, express and support their ideas, make predictions and hypotheses, and test them by conducting investigations.

  12. Contextual modulation of pain sensitivity utilising virtual environments

    PubMed Central

    Smith, Ashley; Carlow, Klancy; Biddulph, Tara; Murray, Brooke; Paton, Melissa; Harvie, Daniel S

    2017-01-01

    Background: Investigating psychological mechanisms that modulate pain, such as those that might be accessed by manipulation of context, is of great interest to researchers seeking to better understand and treat pain. The aim of this study was to better understand the interaction between pain sensitivity and contexts with inherent emotional and social salience, by exploiting modern immersive virtual reality (VR) technology. Methods: A within-subjects, randomised, double-blinded, repeated measures (RM) design was used. In total, 25 healthy participants were exposed to neutral, pleasant, threatening, socially positive and socially negative contexts, using an Oculus Rift DK2. Pressure pain thresholds (PPTs) were recorded in each context, as well as prior to and following the procedure. We also investigated whether trait anxiety and pain catastrophisation interacted with the relationship between the different contexts and pain. Results: Pressure pain sensitivity was not modulated by context (p = 0.48). Anxiety and pain catastrophisation were not significantly associated with PPTs, nor did they interact with the relationship between context and PPTs. Conclusion: Contrary to our hypothesis, socially and emotionally salient contexts did not influence pain thresholds. In light of other research, we suggest that pain outcomes may only be amenable to manipulation by contextual cues that specifically alter the meaning of the pain-eliciting stimulus, rather than manipulating psychological state in general, as in the current study. Future research might exploit immersive VR technology to better explore the link between noxious stimuli and contexts that directly alter their threat value. PMID:28491299

  13. Manipulation of Unknown Objects to Improve the Grasp Quality Using Tactile Information.

    PubMed

    Montaño, Andrés; Suárez, Raúl

    2018-05-03

    This work presents a novel and simple approach in the area of manipulation of unknown objects considering both geometric and mechanical constraints of the robotic hand. Starting with an initial blind grasp, our method improves the grasp quality through manipulation considering the three common goals of the manipulation process: improving the hand configuration, the grasp quality and the object positioning, and, at the same time, prevents the object from falling. Tactile feedback is used to obtain local information of the contacts between the fingertips and the object, and no additional exteroceptive feedback sources are considered in the approach. The main novelty of this work lies in the fact that the grasp optimization is performed on-line as a reactive procedure using the tactile and kinematic information obtained during the manipulation. Experimental results are shown to illustrate the efficiency of the approach.

  14. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  15. Altered sense of Agency in children with spastic cerebral palsy

    PubMed Central

    2011-01-01

    Background Children diagnosed with spastic cerebral palsy (CP) often show perceptual and cognitive problems, which may contribute to their functional deficit. Here we investigated whether an altered ability to determine if an observed movement is performed by oneself (sense of agency) contributes to the motor deficit in children with CP. Methods Three groups (1: children with CP; 2: healthy peers; 3: healthy adults) produced straight drawing movements on a pen tablet that was not visible to the subjects. The produced movement was presented as a virtual moving object on a computer screen. After each trial, subjects had to evaluate whether the movement of the object on the screen was generated by themselves or by a computer program that randomly manipulated the visual feedback by angling the trajectories 0, 5, 10, 15 or 20 degrees away from the target. Results Healthy adults executed the movements in 310 seconds, whereas healthy children and especially children with CP were significantly slower (p < 0.002) (on average 456 seconds and 543 seconds, respectively). There was also a statistical difference between the healthy children and the age-matched children with CP (p = 0.037). When the trajectory of the object generated by the computer corresponded to the subject's own movements, all three groups reported that they were responsible for the movement of the object. When the trajectory of the object deviated by more than 10 degrees from the target, healthy adults and children reported more frequently than children with CP that the computer was responsible for the movement of the object. Consequently, children with CP also attempted more frequently to compensate for the perturbation generated by the computer. Conclusions We conclude that children with CP have a reduced ability to determine whether the movement of a virtual moving object is caused by themselves or by an external source. 
We suggest that this may be related to a poor integration of their intention of movement with visual and proprioceptive information about the performed movement and that altered sense of agency may be an important functional problem in children with CP. PMID:22129483

  16. First Person Experience of Body Transfer in Virtual Reality

    PubMed Central

    Slater, Mel; Spanlang, Bernhard; Sanchez-Vives, Maria V.; Blanke, Olaf

    2010-01-01

    Background Altering the normal association between touch and its visual correlate can result in the illusory perception of a fake limb as part of our own body. Thus, when touch is seen to be applied to a rubber hand while felt synchronously on the corresponding hidden real hand, an illusion of ownership of the rubber hand usually occurs. The illusion has also been demonstrated using visuomotor correlation between the movements of the hidden real hand and the seen fake hand. This type of paradigm has been used with respect to the whole body generating out-of-the-body and body substitution illusions. However, such studies have only ever manipulated a single factor and although they used a form of virtual reality have not exploited the power of immersive virtual reality (IVR) to produce radical transformations in body ownership. Principal Findings Here we show that a first person perspective of a life-sized virtual human female body that appears to substitute the male subjects' own bodies was sufficient to generate a body transfer illusion. This was demonstrated subjectively by questionnaire and physiologically through heart-rate deceleration in response to a threat to the virtual body. This finding is in contrast to earlier experimental studies that assume visuotactile synchrony to be the critical contributory factor in ownership illusions. Our finding was possible because IVR allowed us to use a novel experimental design for this type of problem with three independent binary factors: (i) perspective position (first or third), (ii) synchronous or asynchronous mirror reflections and (iii) synchrony or asynchrony between felt and seen touch. Conclusions The results support the notion that bottom-up perceptual mechanisms can temporarily override top down knowledge resulting in a radical illusion of transfer of body ownership. 
The research also illustrates immersive virtual reality as a powerful tool in the study of body representation and experience, since it supports experimental manipulations that would otherwise be infeasible, with the technology being mature enough to represent human bodies and their motion. PMID:20485681

  17. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  18. A test of the embodied simulation theory of object perception: potentiation of responses to artifacts and animals.

    PubMed

    Matheson, Heath E; White, Nicole C; McMullen, Patricia A

    2014-07-01

    Theories of embodied object representation predict a tight association between sensorimotor processes and visual processing of manipulable objects. Previous research has shown that object handles can 'potentiate' a manual response (i.e., button press) to a congruent location. This potentiation effect is taken as evidence that objects automatically evoke sensorimotor simulations in response to the visual presentation of manipulable objects. In the present series of experiments, we investigated a critical prediction of the theory of embodied object representations that potentiation effects should be observed with manipulable artifacts but not non-manipulable animals. In four experiments we show that (a) potentiation effects are observed with animals and artifacts; (b) potentiation effects depend on the absolute size of the objects and (c) task context influences the presence/absence of potentiation effects. We conclude that potentiation effects do not provide evidence for embodied object representations, but are suggestive of a more general stimulus-response compatibility effect that may depend on the distribution of attention to different object features.

  19. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    PubMed

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
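    Of the four steps listed, the first (color-based segmentation of the hand) and the last (compositing of the user's hand over the virtual scene) can be illustrated in miniature. The red-dominance skin rule below is a common heuristic and an assumption here, not the authors' calibrated model:

    ```python
    import numpy as np

    def skin_mask(rgb):
        """Toy color-based segmentation: mark pixels whose red channel clearly
        dominates green and blue (a crude stand-in for a calibrated skin model)."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        return (r > 95) & (r - g > 15) & (r - b > 15)

    def composite(camera, virtual, mask):
        """Keep camera pixels where the hand mask is set, virtual-scene pixels
        elsewhere: the compositing idea in miniature."""
        out = virtual.copy()
        out[mask] = camera[mask]
        return out
    ```

    In the paper's pipeline this per-pixel rule would be preceded by tracking-based segmentation of the haptic device and background repainting, so that only the hand survives into the final composite.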

  20. Collision detection and modeling of rigid and deformable objects in laparoscopic simulator

    NASA Astrophysics Data System (ADS)

    Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru

    2015-03-01

    Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated into virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at 30 fps or more. Our current laparoscopic simulator detects collisions between a point on the tool tip and the organ surfaces; haptic devices are attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation response. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organs) objects, and to perform the subsequent collision response, with haptic feedback, in real time.
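    Multi-point collision detection along a rigid rod is often reduced to point-to-segment distance tests, treating the rod as a capsule. A minimal sketch of that idea (the capsule model and the direct test against surface vertices are assumptions for illustration, not the paper's exact scheme):

    ```python
    import numpy as np

    def point_segment_distance(p, a, b):
        """Closest distance from point p to segment ab (all 3-vectors)."""
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def rod_collides(vertices, a, b, radius):
        """Multi-point test: does any organ-surface vertex lie within `radius`
        of the tool rod segment ab? (Rod modelled as a capsule.)"""
        return any(point_segment_distance(v, a, b) < radius for v in vertices)
    ```

    A production simulator would prune the vertex list with a spatial hierarchy first; the per-pair test itself stays this simple.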

  1. Contact geometry and mechanics predict friction forces during tactile surface exploration.

    PubMed

    Janko, Marco; Wiertlewski, Michael; Visell, Yon

    2018-03-20

    When we touch an object, complex frictional forces are produced, aiding us in perceiving surface features that help to identify the object at hand, and also facilitating grasping and manipulation. However, even during controlled tactile exploration, sliding friction forces fluctuate greatly, and it is unclear how they relate to the surface topography or mechanics of contact with the finger. We investigated the sliding contact between the finger and different relief surfaces, using high-speed video and force measurements. Informed by these experiments, we developed a friction force model that accounts for surface shape and contact mechanical effects, and is able to predict sliding friction forces for different surfaces and exploration speeds. We also observed that local regions of disconnection between the finger and surface develop near high relief features, due to the stiffness of the finger tissues. Every tested surface had regions that were never contacted by the finger; we refer to these as "tactile blind spots". The results elucidate friction force production during tactile exploration, may aid efforts to connect sensory and motor function of the hand to properties of touched objects, and provide crucial knowledge to inform the rendering of realistic experiences of touch contact in virtual reality.

  2. Virtual and super-virtual refraction method: Application to synthetic data and 2012 Karangsambung survey data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugraha, Andri Dian; Adisatrio, Philipus Ronnie

    2013-09-09

    Seismic refraction surveying is a geophysical method for imaging the Earth's interior, particularly the near surface. One common problem in seismic refraction surveys is weak amplitude at far offsets due to attenuation. This makes it difficult to pick the first refraction arrival, and hence challenging to produce the near-surface image. Seismic interferometry is a technique for manipulating seismic traces to obtain the Green's function between a pair of receivers; one of its uses is improving the quality of first refraction arrivals at far offsets. This research shows that we can estimate physical properties such as seismic velocity and layer thickness from virtual refraction processing. Virtual refraction can also enhance far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
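    The interferometric step of the virtual refraction method amounts to cross-correlating the recordings at two receivers and stacking the correlations over sources; the stacking is where the far-offset SNR gain comes from. A minimal sketch with impulse traces (synthetic toy data, not the Karangsambung survey):

    ```python
    import numpy as np

    def virtual_trace(trace_a, trace_b):
        """Cross-correlate two receiver recordings from one source. The lag of
        the correlation peak approximates the refractor traveltime difference
        between the receivers (the virtual-refraction idea)."""
        return np.correlate(trace_b, trace_a, mode="full")

    def stack_virtual(traces_a, traces_b):
        """Stack the correlations over many sources to boost SNR."""
        return sum(virtual_trace(a, b) for a, b in zip(traces_a, traces_b))

    # Toy example: impulses arriving at samples 10 and 25 should correlate
    # with a peak at lag 15.
    n = 64
    ta = np.zeros(n); ta[10] = 1.0
    tb = np.zeros(n); tb[25] = 1.0
    xcorr = virtual_trace(ta, tb)
    ```

    For equal-length traces of n samples, `mode="full"` puts zero lag at index n - 1, so the peak lag is `argmax(xcorr) - (n - 1)`.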

  3. Anthropomorphic teleoperation: Controlling remote manipulators with the DataGlove

    NASA Technical Reports Server (NTRS)

    Hale, J. P., II

    1992-01-01

    A two-phase effort was conducted to assess the capabilities and limitations of the DataGlove, a lightweight glove input device that outputs signals in real time based on hand shape, orientation, and movement. The first phase was a period for system integration, checkout, and familiarization in a virtual environment. The second phase was a formal experiment using the DataGlove as an input device to control the protoflight manipulator arm (PFMA), a large telerobotic arm with an 8-ft reach. The first phase was used to explore and understand how the DataGlove functions in a virtual environment, build a virtual PFMA, and consider and select a reasonable teleoperation control methodology. Twelve volunteers (six males and six females) participated in a 2 x 3 (x 2) full-factorial formal experiment using the DataGlove to control the PFMA in a simple retraction, slewing, and insertion task. Two within-subjects variables, time delay (0, 1, and 2 seconds) and PFMA wrist flexibility (rigid/flexible), were manipulated. Gender served as a blocking variable. A main effect of time delay was found for slewing and total task times. Correlations were computed among questionnaire responses, and between questionnaire responses and session mean scores and gender. The experimental data were also compared with data collected in another study that used a six degree-of-freedom hand controller to control the PFMA in the same task. It was concluded that the DataGlove is a legitimate teleoperation input device that provides a natural, intuitive user interface. From an operational point of view, it compares favorably with other 'standard' telerobotic input devices and should be considered in future trade studies for teleoperation system designs.

  4. VCSim3: a VR simulator for cardiovascular interventions.

    PubMed

    Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando

    2018-01-01

    Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently gained mainly through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting solutions such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment enable performing a complete angioplasty procedure. We present detailed results on the simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.

  5. On the Value of Estimating Human Arm Stiffness during Virtual Teleoperation with Robotic Manipulators

    PubMed Central

    Buzzi, Jacopo; Ferrigno, Giancarlo; Jansma, Joost M.; De Momi, Elena

    2017-01-01

    Teleoperated robotic systems are spreading widely across fields, from the exploration of hazardous environments to surgery. In teleoperation, users directly manipulate a master device to achieve task execution at the slave robot side; this interaction is fundamental to guaranteeing both system stability and task execution performance. In this work, we propose a non-disruptive method to study arm endpoint stiffness. We evaluate how users exploit the kinematic redundancy of the arm to achieve stability and precision during the execution of different tasks with different master devices. Four users were asked to perform two planar trajectory-following virtual tasks using both a serial and a parallel link master device. The users' arm kinematics and muscular activation were acquired and combined with a user-specific musculoskeletal model to estimate joint stiffness. Using the arm's kinematic Jacobian, the arm endpoint stiffness was derived. The proposed non-disruptive method is capable of estimating arm endpoint stiffness during the execution of virtual teleoperated tasks. The obtained results are in accordance with the existing literature on human motor control and show, throughout the tested trajectory, a modulation of arm endpoint stiffness that is affected by task characteristics and by hand speed and acceleration. PMID:29018319
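    The mapping from estimated joint stiffness to endpoint stiffness via the kinematic Jacobian is commonly written K_x = J^{-T} K_q J^{-1} (ignoring posture-dependent terms). A minimal sketch of that mapping for a square, non-singular Jacobian (the numeric matrices below are illustrative assumptions, not measured data):

    ```python
    import numpy as np

    def endpoint_stiffness(J, Kq):
        """Map joint stiffness Kq to Cartesian endpoint stiffness through the
        kinematic Jacobian J (conservative mapping, posture-dependent terms
        ignored): K_x = J^{-T} Kq J^{-1}."""
        J_inv = np.linalg.inv(J)
        return J_inv.T @ Kq @ J_inv
    ```

    Intuitively, a joint direction amplified by the Jacobian contributes less stiffness at the endpoint: with J = diag(2, 1) and Kq = diag(4, 1), the endpoint stiffness comes out isotropic, diag(1, 1).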

  6. A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.

    PubMed

    Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio

    2017-01-01

    A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact/no-contact capability, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimal encumbrance of the hand workspace. The device was designed to render constant to low-frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. The stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment, with 10 subjects, evaluated interaction forces in a virtual lift-and-hold task. Although performance differed across the two manipulation experiments, the overall results show that participants controlled interaction forces better when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.

  7. Recognition profile of emotions in natural and virtual faces.

    PubMed

    Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C; Mathiak, Klaus

    2008-01-01

    Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for different emotions and different age groups. Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.

  8. Recognition Profile of Emotions in Natural and Virtual Faces

    PubMed Central

    Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C.; Mathiak, Klaus

    2008-01-01

    Background Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for different emotions and different age groups. Methodology/Principal Findings Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Conclusions/Significance Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications. PMID:18985152

  9. Object impedance control for cooperative manipulation - Theory and experimental results

    NASA Technical Reports Server (NTRS)

    Schneider, Stanley A.; Cannon, Robert H., Jr.

    1992-01-01

    This paper presents the dynamic control module of the Dynamic and Strategic Control of Cooperating Manipulators (DASCCOM) project at Stanford University's Aerospace Robotics Laboratory. First, the cooperative manipulation problem is analyzed from a systems perspective, and the desirable features of a control system for cooperative manipulation are discussed. Next, a control policy is developed that enforces a controlled impedance not of the individual arm endpoints, but of the manipulated object itself. A parallel implementation for a multiprocessor system is presented. The controller fully compensates for the system dynamics and directly controls the object internal forces. Most importantly, it presents a simple, powerful, intuitive interface to higher level strategic control modules. Experimental results from a dual two-link-arm robotic system are used to compare the object impedance controller with other strategies, both for free-motion slews and environmental contact.
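    Object-level impedance control commands forces so that the manipulated object itself follows a target mass-spring-damper law, M x'' + B x' + K (x - x_des) = F_ext. A one-dimensional sketch of that target behaviour (the gains and the explicit-Euler integration are illustrative assumptions, not the DASCCOM implementation):

    ```python
    def simulate_object_impedance(x0, x_des, M=1.0, B=8.0, K=16.0,
                                  dt=0.001, steps=5000):
        """1-D sketch of object impedance: the controller makes the *object*
        (not each arm endpoint) obey M x'' + B x' + K (x - x_des) = F_ext.
        With no external force, the object settles at x_des."""
        x, v = x0, 0.0
        for _ in range(steps):
            a = (-B * v - K * (x - x_des)) / M  # F_ext = 0 here
            v += a * dt
            x += v * dt
        return x
    ```

    The gains above give a critically damped response (natural frequency 4 rad/s); in the multi-arm case the same law is enforced on the object while the internal (squeezing) forces are regulated separately.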

  10. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.; Bogart, R. S.; Davey, A. R.; Hill, F.; Martens, P. C.; Zarro, D. M.; VSO Team

    2008-05-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  11. Research on Modeling Technology of Virtual Robot Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Huo, J. L.; Y Sun, L.; Y Hao, X.

    2017-12-01

    Because of its dangerous working environment, the underwater operation robot for nuclear power stations requires manual teleoperation, and the robot's position and orientation must be guided in real time during operation. In this paper, geometric modeling of the virtual robot and its working environment is accomplished using SolidWorks software, realizing accurate modeling and assembly of the robot. LabVIEW software is then used to read the model, establish the manipulator's forward and inverse kinematics models, and realize hierarchical modeling of the virtual robot together with computer graphics modeling. Experimental results show that the method studied in this paper can be successfully applied to a robot control system.
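    The forward/inverse kinematics pairing mentioned in this abstract can be illustrated for the simplest case, a planar two-link arm (link lengths here are illustrative, not the actual robot's parameters):

```python
import math

def fk(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics of a planar two-link arm: joint angles -> tip position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics, elbow-down branch, for a reachable (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp for numerical safety
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

t1, t2 = ik(0.5, 0.2)
print(fk(t1, t2))  # round-trips to (0.5, 0.2) up to floating-point rounding
```

    A real manipulator model would extend this to the full joint chain, but the round-trip check (fk of ik recovers the target) is the same validation one would run against the LabVIEW model.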

  12. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph B.; Bogart; Davey; Hill; Masters; Zarro

    2008-01-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  13. Use of the Remote Access Virtual Environment Network (RAVEN) for coordinated IVA-EVA astronaut training and evaluation.

    PubMed

    Cater, J P; Huffman, S D

    1995-01-01

    This paper presents a unique virtual reality training and assessment tool developed under a NASA grant, "Research in Human Factors Aspects of Enhanced Virtual Environments for Extravehicular Activity (EVA) Training and Simulation." The Remote Access Virtual Environment Network (RAVEN) was created to train and evaluate the verbal, mental and physical coordination required between the intravehicular (IVA) astronaut operating the Remote Manipulator System (RMS) arm and the EVA astronaut standing in foot restraints on the end of the RMS. The RAVEN system currently allows the EVA astronaut to approach the Hubble Space Telescope (HST) under control of the IVA astronaut and grasp, remove, and replace the Wide Field Planetary Camera drawer from its location in the HST. Two viewpoints, one stereoscopic and one monoscopic, were created and linked by Ethernet, providing the two trainees with the appropriate training environments.

  14. Recent Progress in Virtual Reality Exposure Therapy for Phobias: A Systematic Review.

    PubMed

    Botella, Cristina; Fernández-Álvarez, Javier; Guillén, Verónica; García-Palacios, Azucena; Baños, Rosa

    2017-07-01

    This review is designed to systematically examine the available evidence about virtual reality exposure therapy's (VRET) efficacy for phobias, critically describe some of the most important challenges in the field and discuss possible directions. Evidence reveals that virtual reality (VR) is an effective treatment for phobias and useful for studying specific issues, such as pharmacological compounds and behavioral manipulations, that can enhance treatment outcomes. In addition, some variables, such as sense of presence in virtual environments, have a significant influence on outcomes, but further research is needed to better understand their role in therapeutic outcomes. We conclude that VR is a useful tool to improve exposure therapy and it can be a good option to analyze the processes and mechanisms involved in exposure therapy and the ways this strategy can be enhanced. In the coming years, there will be a significant expansion of VR in routine practice in clinical contexts.

  15. Perceiving interpersonally-mediated risk in virtual environments

    PubMed Central

    Portnoy, David B.; Smoak, Natalie D.; Marsh, Kerry L.

    2009-01-01

    Using virtual reality (VR) to examine risky behavior that is mediated by interpersonal contact, such as agreeing to have sex, drink, or smoke with someone, offers particular promise and challenges. Social contextual stimuli that might trigger impulsive responses can be carefully controlled in virtual environments (VE), and yet manipulations of risk might be implausible to participants if they do not feel sufficiently immersed in the environment. The current study examined whether individuals can display adequate evidence of presence in a VE that involved potential interpersonally-induced risk: meeting a potential dating partner. Results offered some evidence for the potential of VR for the study of such interpersonal risk situations. Participants’ reaction to the scenario and risk-associated responses to the situation suggested that the embodied nature of virtual reality overrode the reality of the risk’s impossibility, allowing participants to experience adequate situational embedding, or presence. PMID:20228871

  16. Perceiving interpersonally-mediated risk in virtual environments.

    PubMed

    Portnoy, David B; Smoak, Natalie D; Marsh, Kerry L

    2010-03-01

    Using virtual reality (VR) to examine risky behavior that is mediated by interpersonal contact, such as agreeing to have sex, drink, or smoke with someone, offers particular promise and challenges. Social contextual stimuli that might trigger impulsive responses can be carefully controlled in virtual environments (VE), and yet manipulations of risk might be implausible to participants if they do not feel sufficiently immersed in the environment. The current study examined whether individuals can display adequate evidence of presence in a VE that involved potential interpersonally-induced risk: meeting a potential dating partner. Results offered some evidence for the potential of VR for the study of such interpersonal risk situations. Participants' reaction to the scenario and risk-associated responses to the situation suggested that the embodied nature of virtual reality overrode the reality of the risk's impossibility, allowing participants to experience adequate situational embedding, or presence.

  17. Supporting Teachers' Use of Virtual Manipulatives

    ERIC Educational Resources Information Center

    Reiten, Lindsay

    2017-01-01

    Technology integration is a critical and longstanding issue in mathematics education. As access to various technology resources increases, so too has the expectation for teachers to use technology to enhance student engagement and understanding. Despite the potential benefits to integrating technology into teachers' instructional practices,…

  18. Neural network-based position synchronised internal force control scheme for cooperative manipulator system

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Xu, Fan; Lu, GuoDong

    2017-09-01

    More complex problems of simultaneous position and internal force control occur with cooperative manipulator systems than that of a single one. In the presence of unwanted parametric and modelling uncertainties as well as external disturbances, a decentralised position synchronised force control scheme is proposed. With a feedforward neural network estimating engine, a precise model of the system dynamics is not required. Unlike conventional cooperative or synchronised controllers, virtual position and virtual synchronisation errors are introduced for internal force tracking control and task space position synchronisation. Meanwhile joint space synchronisation and force measurement are unnecessary. Together with simulation studies and analysis, the position and the internal force errors are shown to asymptotically converge to zero. Moreover, the controller exhibits different characteristics with selected synchronisation factors. Under certain settings, it can deal with temporary cooperation by an intelligent retreat mechanism, where less internal force would occur and rigid collision can be avoided. Using a Lyapunov stability approach, the controller is proven to be robust in face of the aforementioned uncertainties.

  19. A review of virtual cutting methods and technology in deformable objects.

    PubMed

    Wang, Monan; Ma, Yuzheng

    2018-06-05

    Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. The virtual cutting method is introduced, and we discuss the benefits and limitations of these methods and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model that is caused by cutting. Virtual cutting is widely used in surgery simulation, and research of the cutting method is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    PubMed Central

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code. PMID:25110745

  1. Integrating a Mobile Augmented Reality Activity to Contextualize Student Learning of a Socioscientific Issue

    ERIC Educational Resources Information Center

    Chang, Hsin-Yi; Wu, Hsin-Kai; Hsu, Ying-Shao

    2013-01-01

    virtual objects or information overlaying physical objects or environments, resulting in a mixed reality in which virtual objects and real environments coexist in a meaningful way to augment learning…

  2. Illusion media: Generating virtual objects using realizable metamaterials

    NASA Astrophysics Data System (ADS)

    Jiang, Wei Xiang; Ma, Hui Feng; Cheng, Qiang; Cui, Tie Jun

    2010-03-01

    We propose a class of optical transformation media, illusion media, which render the enclosed object invisible and generate one or more virtual objects as desired. We apply the proposed media to design a microwave device, which transforms an actual object into two virtual objects. Such an illusion device exhibits unusual electromagnetic behavior as verified by full-wave simulations. Different from the published illusion devices which are composed of left-handed materials with simultaneously negative permittivity and permeability, the proposed illusion media have finite and positive permittivity and permeability. Hence the designed device could be realizable using artificial metamaterials.

  3. Dynamic coupling of underactuated manipulators

    NASA Astrophysics Data System (ADS)

    Bergerman, Marcel; Lee, Christopher; Xu, Yangsheng

    1994-08-01

    In recent years, researchers have been turning their attention to so-called underactuated systems, where the term underactuated refers to the fact that the system has more joints than control actuators. Some examples of underactuated systems are robot manipulators with failed actuators; free-floating space robots, where the base can be considered as a virtual passive linkage in inertia space; legged robots with passive joints; and hyper-redundant (snake-like) robots with passive joints. These examples illustrate the importance of studying underactuated systems. For example, if some actuators of a conventional manipulator fail, the loss of one or more degrees of freedom may compromise an entire operation. In free-floating space systems, the base (satellite) can be considered as a 6-DOF device without positioning actuators. Finally, manipulators with passive joints and hyper-redundant robots with few actuators are important from the viewpoint of energy saving, lightweight design and compactness.

  4. Virtual wall-based haptic-guided teleoperated surgical robotic system for single-port brain tumor removal surgery.

    PubMed

    Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young

    2017-01-01

    This article presents a haptic-guided teleoperation for a tumor removal surgical robotic system, so-called a SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even those that seat deeply inside the brain, and to remove the tissue with full maneuverability. For a safe and accurate operation to remove only tumor tissue completely while minimizing damage to the normal tissue, a virtual wall-based haptic guidance together with a medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and a haptic-guided teleoperation using a virtual wall are described in the context of using SIROMAN. A series of experiments using a simplified virtual wall are performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory is improved by 57% compared to one without. The tissue removal performance is also improved by 21% ( p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
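    The virtual-wall guidance described in this abstract is commonly realized as a penalty force that activates only when the tool penetrates the wall. A minimal one-axis sketch (stiffness and damping values are illustrative, not parameters of the SIROMAN system):

```python
def virtual_wall_force(tool_pos, tool_vel=0.0, wall_pos=0.0,
                       stiffness=500.0, damping=5.0):
    """Penalty-based virtual wall along one axis: positions beyond wall_pos
    are 'inside' the forbidden region. Values are illustrative."""
    penetration = tool_pos - wall_pos
    if penetration <= 0.0:
        return 0.0                      # free space: no haptic force
    # Spring-damper penalty; damping resists only further penetration
    return -stiffness * penetration - damping * max(tool_vel, 0.0)

print(virtual_wall_force(-0.01))   # outside the wall: 0.0
print(virtual_wall_force(0.002))   # 2 mm penetration: about -1.0 N (500 N/m x 2 mm)
```

    In a surgical setting the wall geometry would come from segmented preoperative images and the force would be rendered through the haptic master device, but the restoring-force principle is the same.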

  5. Virtual landmarks

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.

    2017-03-01

    Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
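    The recursive-PCA subdivision can be sketched on a point cloud (a simplification: the paper operates on binary, fuzzy, or gray-valued object regions, and the split depth and region handling here are illustrative):

```python
import numpy as np

def virtual_landmarks(points, levels=2):
    """Recursively split a point cloud at its centroid along the first
    principal axis (via SVD) and return each leaf region's centroid as a
    'virtual landmark'. Simplified sketch of the recursive-PCA idea."""
    regions = [points]
    for _ in range(levels):
        split = []
        for r in regions:
            c = r.mean(axis=0)
            _, _, vt = np.linalg.svd(r - c, full_matrices=False)
            side = (r - c) @ vt[0]      # signed position along the principal axis
            split += [r[side <= 0], r[side > 0]]
        regions = split
    return np.array([r.mean(axis=0) for r in regions if len(r)])

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
lms = virtual_landmarks(pts, levels=2)
print(lms.shape)  # (4, 3): four landmarks after two split levels
```

    Because each split is defined by the region's own centroid and principal axis, the resulting landmarks inherit the invariance to translation, rotation, and scaling noted in the abstract.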

  6. Role of virtual reality for cerebral palsy management.

    PubMed

    Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy

    2014-08-01

    Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game where a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to their potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article are to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.

  7. Object Representation in Infants' Coordination of Manipulative Force

    ERIC Educational Resources Information Center

    Mash, Clay

    2007-01-01

    This study examined infants' use of object knowledge for scaling the manipulative force of object-directed actions. Infants 9, 12, and 15 months of age were outfitted with motion-analysis sensors on their arms and then presented with stimulus objects to examine individually over a series of familiarization trials. Two stimulus objects were used in…

  8. Dynamic Primitives of Motor Behavior

    PubMed Central

    Hogan, Neville; Sternad, Dagmar

    2013-01-01

    We present in outline a theory of sensorimotor control based on dynamic primitives, which we define as attractors. To account for the broad class of human interactive behaviors—especially tool use—we propose three distinct primitives: submovements, oscillations and mechanical impedances, the latter necessary for interaction with objects. Due to fundamental features of the neuromuscular system, most notably its slow response, we argue that encoding in terms of parameterized primitives may be an essential simplification required for learning, performance, and retention of complex skills. Primitives may simultaneously and sequentially be combined to produce observable forces and motions. This may be achieved by defining a virtual trajectory composed of submovements and/or oscillations interacting with impedances. Identifying primitives requires care: in principle, overlapping submovements would be sufficient to compose all observed movements but biological evidence shows that oscillations are a distinct primitive. Conversely, we suggest that kinematic synergies, frequently discussed as primitives of complex actions, may be an emergent consequence of neuromuscular impedance. To illustrate how these dynamic primitives may account for complex actions, we briefly review three types of interactive behaviors: constrained motion, impact tasks, and manipulation of dynamic objects. PMID:23124919

  9. The RoboCup Mixed Reality League - A Case Study

    NASA Astrophysics Data System (ADS)

    Gerndt, Reinhard; Bohnen, Matthias; da Silva Guerra, Rodrigo; Asada, Minoru

    In typical mixed reality systems there is only a one-way interaction from real to virtual. A human user or the physics of a real object may influence the behavior of virtual objects, but real objects usually cannot be influenced by the virtual world. By introducing real robots into the mixed reality system, we allow a true two-way interaction between virtual and real worlds. Our system has been used since 2007 to implement the RoboCup mixed reality soccer games and other applications for research and edutainment. Our framework system is freely programmable to generate any virtual environment, which may then be further supplemented with virtual and real objects. The system allows for control of any real object based on differential drive robots. The robots may be adapted for different applications, e.g., with markers for identification or with covers to change shape and appearance. They may also be “equipped” with virtual tools. In this chapter we present the hardware and software architecture of our system and some applications. The authors believe this can be seen as a first implementation of Ivan Sutherland’s 1965 idea of the ultimate display: “The ultimate display would, of course, be a room within which the computer can control the existence of matter …” (Sutherland, 1965, Proceedings of IFIPS Congress 2:506-508).

  10. A domain-specific system for representing knowledge of both man-made objects and human actions. Evidence from a case with an association of deficits.

    PubMed

    Vannuscorps, Gilles; Pillon, Agnesa

    2011-07-01

    We report the single-case study of a brain-damaged individual, JJG, presenting with a conceptual deficit and whose knowledge of living things, man-made objects, and actions was assessed. The aim was to seek empirical evidence pertaining to the issue of how conceptual knowledge of objects, both living things and man-made objects, is related to conceptual knowledge of actions at the functional level. We first found that JJG's conceptual knowledge of both man-made objects and actions was similarly impaired while his conceptual knowledge of living things was spared, as was his knowledge of unique entities. We then examined whether this pattern of association of a conceptual deficit for both man-made objects and actions could be accounted for by, first, the "sensory/functional" account and, second, the "manipulability" account of category-specific conceptual impairments advocated within the Feature-Based-Organization theory of conceptual knowledge organization, by assessing, first, the patient's knowledge of sensory compared to functional features; second, his knowledge of manipulation compared to functional features; and, third, his knowledge of manipulable compared to non-manipulable objects and actions. The latter assessment also allowed us to evaluate an account of the deficits in terms of failures of simulating the hand movements implied by manipulable objects and manual actions. The findings showed that, contrary to the predictions made by the "sensory/functional", the "manipulability", and the "failure-of-simulating" accounts of category-specific conceptual impairments, the patient's association of deficits for both man-made objects and actions was not associated with a disproportionate impairment of functional compared to sensory knowledge or of manipulation compared to functional knowledge; manipulable items were not more impaired than non-manipulable items either.
In the general discussion, we propose to account for the patient's association of deficits by the hypothesis that concepts whose core property is that of being a means of achieving a goal - like the concepts of man-made objects and of actions - are learned, represented and processed by a common domain-specific conceptual system, which would have evolved to allow human beings to quickly and efficiently design and understand means to achieve goals and purposes. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Generation of realistic virtual nodules based on three-dimensional spatial resolution in lung computed tomography: A pilot phantom study.

    PubMed

    Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2017-10-01

    The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. 
Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
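    The deconvolve-then-reconvolve pipeline described above can be sketched in one dimension with synthetic Gaussian PSFs (the paper uses the measured in-plane PSF and slice sensitivity profile of real scanners; the regularization constant and all profile values here are illustrative):

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalized 1-D Gaussian point spread function centered in an n-sample window."""
    x = np.arange(n) - n // 2
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def transfer_nodule(img_a, psf_a, psf_c, eps=1e-3):
    """Deconvolve a scanner-A nodule image by scanner A's PSF (regularized
    inverse filter) to estimate the nodule-like object function, then
    convolve with scanner C's PSF to synthesize the virtual nodule."""
    A = np.fft.fft(np.fft.ifftshift(psf_a))
    C = np.fft.fft(np.fft.ifftshift(psf_c))
    obj = np.fft.fft(img_a) * np.conj(A) / (np.abs(A) ** 2 + eps)
    return np.fft.ifft(obj * C).real

n = 256
truth = np.zeros(n)
truth[120:136] = 1000.0                      # idealized nodule profile (HU-like)
psf_a, psf_c = gaussian_psf(n, 2.0), gaussian_psf(n, 4.0)
blur = lambda img, psf: np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.fft.ifftshift(psf))).real
seen_a = blur(truth, psf_a)                  # what scanner A would measure
virtual_c = transfer_nodule(seen_a, psf_a, psf_c)
direct_c = blur(truth, psf_c)                # what scanner C would measure
print(np.max(np.abs(virtual_c - direct_c)) < 5.0)  # virtual nodule closely matches
```

    The small residual corresponds to the regularization needed to stabilize the deconvolution, analogous to the few-HU RMSEs reported in the abstract.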

  12. Toward automated formation of microsphere arrangements using multiplexed optical tweezers

    NASA Astrophysics Data System (ADS)

    Rajasekaran, Keshav; Bollavaram, Manasa; Banerjee, Ashis G.

    2016-09-01

    Optical tweezers offer certain advantages such as multiplexing using a programmable spatial light modulator, flexibility in the choice of the manipulated object and the manipulation medium, precise control, easy object release, and minimal object damage. However, automated manipulation of multiple objects in parallel, which is essential for efficient and reliable formation of micro-scale assembly structures, poses a difficult challenge. There are two primary research issues in addressing this challenge. First, the presence of stochastic Langevin force giving rise to Brownian motion requires motion control for all the manipulated objects at fast rates of several Hz. Second, the object dynamics is non-linear and even difficult to represent analytically due to the interaction of multiple optical traps that are manipulating neighboring objects. As a result, automated controllers have not been realized for tens of objects, particularly with three dimensional motions with guaranteed collision avoidances. In this paper, we model the effect of interacting optical traps on microspheres with significant Brownian motions in stationary fluid media, and develop simplified state-space representations. These representations are used to design a model predictive controller to coordinate the motions of several spheres in real time. Preliminary experiments demonstrate the utility of the controller in automatically forming desired arrangements of varying configurations starting with randomly dispersed microspheres.
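    As a toy illustration of the control problem, not the model predictive controller of the paper, the sketch below simulates an overdamped bead with Brownian motion in a harmonic optical trap and steers the trap toward a target with a simple bounded-step policy; all constants are nondimensional and illustrative:

```python
import numpy as np

def langevin_step(pos, trap, k=1.0, gamma=1.0, kT=0.004, dt=0.01, rng=None):
    """One Euler-Maruyama step of an overdamped bead in a harmonic trap:
    gamma dx = -k (x - trap) dt + sqrt(2 kT gamma) dW."""
    noise = rng.normal(scale=np.sqrt(2.0 * kT * gamma * dt), size=pos.shape)
    return pos + (-k * (pos - trap) / gamma) * dt + noise / gamma

rng = np.random.default_rng(1)
pos = np.zeros(2)
target = np.array([1.0, -0.5])
for _ in range(3000):
    # Steer the trap a bounded step toward the target each control cycle
    trap = pos + np.clip(target - pos, -0.05, 0.05)
    pos = langevin_step(pos, trap, rng=rng)
print(np.linalg.norm(pos - target) < 0.5)  # bead is herded to the target despite noise
```

    A multi-bead controller must additionally handle trap-trap interactions and collision avoidance, which is what motivates the model predictive formulation in the paper.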

  13. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  14. Correlation between perceptual, visuo-spatial, and psychomotor aptitude to duration of training required to reach performance goals on the MIST-VR surgical simulator.

    PubMed

    McClusky, D A; Ritter, E M; Lederman, A B; Gallagher, A G; Smith, C D

    2005-01-01

    Given the dynamic nature of modern surgical education, determining factors that may improve the efficiency of laparoscopic training is warranted. The objective of this study was to analyze whether perceptual, visuo-spatial, or psychomotor aptitude is related to the amount of training required to reach specific performance-based goals on a virtual reality surgical simulator. Sixteen MS4 medical students participated in an elective skills course intended to train laparoscopic skills. All were tested for perceptual, visuo-spatial, and psychomotor aptitude using previously validated psychological tests. Training involved as many instructor-guided 1-hour sessions as needed to reach performance goals on a custom designed MIST-VR manipulation-diathermy task (Mentice AB, Gothenburg, Sweden). Thirteen subjects reached performance goals by the end of the course. Two were excluded from analysis due to previous experience with the MIST-VR (total n = 11). Perceptual ability (r = -0.76, P = 0.007) and psychomotor skills (r = 0.62, P = 0.04) significantly correlated with the number of trials required. Visuo-spatial ability did not significantly correlate with training duration. The number of trials required to train subjects to performance goals on the MIST-VR manipulation diathermy task is significantly related to perceptual and psychomotor aptitude.

  15. Adaptive strategies of remote systems operators exposed to perturbed camera-viewing conditions

    NASA Technical Reports Server (NTRS)

    Stuart, Mark A.; Manahan, Meera K.; Bierschwale, John M.; Sampaio, Carlos E.; Legendre, A. J.

    1991-01-01

    This report describes a preliminary investigation of the use of perturbed visual feedback during the performance of simulated space-based remote manipulation tasks. The primary objective of this NASA evaluation was to determine to what extent operators exhibit adaptive strategies which allow them to perform these specific types of remote manipulation tasks more efficiently while exposed to perturbed visual feedback. A secondary objective of this evaluation was to establish a set of preliminary guidelines for enhancing remote manipulation performance and reducing the adverse effects. These objectives were accomplished by studying the remote manipulator performance of test subjects exposed to various perturbed camera-viewing conditions while performing a simulated space-based remote manipulation task. Statistical analysis of performance and subjective data revealed that remote manipulation performance was adversely affected by the use of perturbed visual feedback and performance tended to improve with successive trials in most perturbed viewing conditions.

  16. The development of a collaborative virtual environment for finite element simulation

    NASA Astrophysics Data System (ADS)

    Abdul-Jalil, Mohamad Kasim

    Communication between geographically distributed designers has been a major hurdle in traditional engineering design. Conventional methods of communication, such as video conferencing, telephone, and email, are less efficient, especially when dealing with complex design models. Complex shapes, intricate features and hidden parts are often difficult to describe verbally or even using traditional 2-D or 3-D visual representations. Virtual Reality (VR) and Internet technologies offer substantial potential to bridge this communication barrier. VR technology allows designers to immerse themselves in a virtual environment to view and manipulate a design model just as in real life. Fast Internet connectivity has enabled fast data transfer between remote locations. Although various collaborative virtual environment (CVE) systems have been developed in the past decade, they are limited to high-end technology that is not accessible to typical designers. The objective of this dissertation is to discover and develop a new approach to increase the efficiency of the design process, particularly for large-scale applications wherein participants are geographically distributed. A multi-platform and easily accessible collaborative virtual environment (CVRoom) is developed to accomplish the stated research objective. Geographically dispersed designers can meet in a single shared virtual environment to discuss issues pertaining to the engineering design process and to make trade-off decisions more quickly than before, thereby speeding the entire process. This faster design process is achieved through the development of capabilities that better enable the multidisciplinary modeling and trade-off decisions that are so critical before launching into a formal detailed design.
The features of the environment developed as a result of this research include the ability to view design models, to use voice interaction, and to link engineering analysis modules (such as the Finite Element Analysis module demonstrated in this work). One of the major issues in developing a CVE system for engineering design purposes is obtaining pertinent simulation results in real time. This is critical so that the designers can make decisions based on these results quickly. For example, in a finite element analysis, if a design model is changed or perturbed, the analysis results must be obtained in real time or near real time to make the virtual meeting environment realistic. In this research, the finite difference-based Design Sensitivity Analysis (DSA) approach is employed to approximate structural responses (e.g., stress and displacement), so as to demonstrate the applicability of CVRoom for engineering design trade-offs. This DSA approach provides fast approximation and is well suited to the virtual meeting environment, where fast response time is required. The DSA-based approach is tested on several example problems to show its applicability and limitations. This dissertation demonstrates that an increase in efficiency and a reduction of the time required for a complex design process can be accomplished using the approach developed in this dissertation research. Several implementations of CVRoom by students working on common design tasks were investigated. All participants preferred using the collaborative virtual environment developed in this dissertation work (CVRoom) over other modes of interaction. It is proposed here that CVRoom is representative of the type of collaborative virtual environment that will be used by most designers in the future to reduce the time required in a design cycle and thereby reduce the associated cost.
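
    The finite-difference DSA idea described above can be sketched generically: run the expensive analysis once at the baseline design and once at a perturbed design, form a sensitivity, then approximate responses to new design changes with a first-order expansion. The cantilever deflection formula below is an illustrative stand-in for a finite element response, not CVRoom's actual model:

```python
def tip_deflection(load, length, E, I):
    """Cantilever tip deflection delta = P*L^3 / (3*E*I) -- an
    illustrative stand-in for an expensive finite element response."""
    return load * length ** 3 / (3 * E * I)

def fd_sensitivity(response, x0, h=1e-6):
    """Forward finite-difference design sensitivity dR/dx at x0."""
    return (response(x0 + h) - response(x0)) / h

# Hypothetical steel beam; the design variable is the length L.
E, I, P = 210e9, 8.33e-6, 1000.0
L0 = 2.0

# One "slow" baseline analysis plus one perturbed analysis, done offline...
base = tip_deflection(P, L0, E, I)
dRdL = fd_sensitivity(lambda L: tip_deflection(P, L, E, I), L0)

# ...then responses to new design tweaks are approximated in real time.
dL = 0.05                                  # designer lengthens the beam by 5 cm
approx = base + dRdL * dL                  # first-order DSA estimate
exact = tip_deflection(P, L0 + dL, E, I)   # what a full re-analysis would give
```

For this 5 cm perturbation the first-order estimate differs from the full re-evaluation by well under one percent, which is why this style of approximation suits a real-time virtual meeting.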

  17. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement in persons with hemiparesis.

    PubMed

    Merians, Alma S; Fluet, Gerard G; Qiu, Qinyin; Saleh, Soha; Lafond, Ian; Davidow, Amy; Adamovich, Sergei V

    2011-05-16

    Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training.

  18. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement in persons with hemiparesis

    PubMed Central

    2011-01-01

    Background Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Methods Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. Results The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Conclusions Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training. PMID:21575185

  19. Encoding the world around us: motor-related processing influences verbal memory.

    PubMed

    Madan, Christopher R; Singhal, Anthony

    2012-09-01

    It is known that properties of words such as their imageability can influence our ability to remember those words. However, it is not known if other object-related properties can also influence our memory. In this study we asked whether a word representing a concrete object that can be functionally interacted with (i.e., high-manipulability word) would enhance the memory representations for that item compared to a word representing a less manipulable object (i.e., low-manipulability word). Here participants incidentally encoded high-manipulability (e.g., CAMERA) and low-manipulability words (e.g., TABLE) while making word judgments. Using a between-subjects design, we varied the depth-of-processing involved in the word judgment task: participants judged the words based on personal experience (deep/elaborative processing), word length (shallow), or functionality (intermediate). Participants were able to remember high-manipulability words better than low-manipulability words in both the personal experience and word length groups; thus presenting the first evidence that manipulability can influence memory. However, we observed better memory for low- than high-manipulability words in the functionality group. We explain this surprising interaction between manipulability and memory as being mediated by automatic vs. controlled motor-related cognition. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Learning to Manipulate and Categorize in Human and Artificial Agents

    ERIC Educational Resources Information Center

    Morlino, Giuseppe; Gianelli, Claudia; Borghi, Anna M.; Nolfi, Stefano

    2015-01-01

    This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the…

  1. Object Manipulation Facilitates Kind-Based Object Individuation of Shape-Similar Objects

    ERIC Educational Resources Information Center

    Kingo, Osman S.; Krojgaard, Peter

    2011-01-01

    Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were…

  2. Visible Geology - Interactive online geologic block modelling

    NASA Astrophysics Data System (ADS)

    Cockett, R.

    2012-12-01

    Geology is a highly visual science, and many disciplines require spatial awareness and manipulation. For example, interpreting cross-sections, geologic maps, or plotting data on a stereonet all require various levels of spatial abilities. These skills are often not focused on in undergraduate geoscience curricula, and many students struggle with spatial relations, manipulations, and penetrative abilities (e.g. Titus & Horsman, 2009). A newly developed program, Visible Geology, allows students to be introduced to many geologic concepts and spatial skills in a virtual environment. Visible Geology is a web-based, three-dimensional environment where students can create and interrogate their own geologic block models. The program begins with a blank model; users then add geologic beds (with custom thickness and color) and can add geologic deformation events like tilting, folding, and faulting. Additionally, simple intrusive dikes can be modelled, as well as unconformities. Students can also explore the interaction of geology with topography by drawing elevation contours to produce their own topographic models. Students can not only spatially manipulate their model, but can create cross-sections and boreholes to practice their visual penetrative abilities. Visible Geology is easy to access and use, with no downloads required, so it can be incorporated into current, paper-based, lab activities. Sample learning activities are being developed that target introductory and structural geology curricula with learning objectives such as relative geologic history, fault characterization, apparent dip and thickness, interference folding, and stereonet interpretation. Visible Geology provides a richly interactive and immersive environment for students to explore geologic concepts and practice their spatial skills. (Screenshot: Visible Geology showing folding and faulting interactions on a ridge topography.)
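
    One of the listed learning objectives, apparent dip, follows a standard structural-geology relation that is easy to sketch in code (a generic illustration, not part of Visible Geology itself):

```python
import math

def apparent_dip(true_dip_deg, beta_deg):
    """Apparent dip of a planar bed in a vertical cross-section:
    tan(alpha) = tan(delta) * sin(beta), where delta is the true dip and
    beta is the angle between the bed's strike and the section line."""
    t = math.tan(math.radians(true_dip_deg)) * math.sin(math.radians(beta_deg))
    return math.degrees(math.atan(t))

# A section cut perpendicular to strike (beta = 90) recovers the true dip;
# a section parallel to strike (beta = 0) shows a horizontal trace.
full = apparent_dip(30.0, 90.0)   # 30 degrees
flat = apparent_dip(30.0, 0.0)    # 0 degrees
```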

  3. Virtual expansion of the technical vision system for smart vehicles based on multi-agent cooperation model

    NASA Astrophysics Data System (ADS)

    Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay

    2017-12-01

    Road safety and driving in dense traffic flows pose some challenges in receiving information about surrounding moving objects, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intelligent agent model, and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in a video flow. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring to the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.

  4. Reconfigurable optical assembly of nanostructures

    PubMed Central

    Montelongo, Yunuen; Yetisen, Ali K.; Butt, Haider; Yun, Seok-Hyun

    2016-01-01

    Arrangements of nanostructures in well-defined patterns are the basis of photonic crystals, metamaterials and holograms. Furthermore, rewritable optical materials can be achieved by dynamically manipulating nanoassemblies. Here we demonstrate a mechanism to configure plasmonic nanoparticles (NPs) in polymer media using nanosecond laser pulses. The mechanism relies on optical forces produced by the interference of laser beams, which allow NPs to migrate to lower-energy configurations. The resulting NP arrangements are stable without any external energy source, but erasable and rewritable by additional recording pulses. We demonstrate reconfigurable optical elements including multilayer Bragg diffraction gratings, volumetric photonic crystals and lenses, as well as dynamic holograms of three-dimensional virtual objects. We aim to expand the applications of optical forces, which have been mostly restricted to optical tweezers. Holographic assemblies of nanoparticles will allow a new generation of programmable composites for tunable metamaterials, data storage devices, sensors and displays. PMID:27337216
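
    The fringe spacing of the interference pattern that drives the particle migration follows the standard two-beam relation Λ = λ / (2 sin θ), where θ is the half-angle between the beams. A hedged sketch with generic optics (the wavelength and angle are illustrative, not the paper's recording parameters):

```python
import math

def fringe_spacing(wavelength_nm, half_angle_deg):
    """Period of the standing intensity pattern formed by two coherent
    beams crossing at half-angle theta: Lambda = lambda / (2 sin theta)."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

# e.g. 532 nm beams crossing at a 15-degree half-angle write a grating
# with a period of roughly one micron; counter-propagating beams
# (half-angle 90 degrees) give the finest possible period, lambda / 2.
period = fringe_spacing(532.0, 15.0)
```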

  5. Tomographic techniques for the study of exceptionally preserved fossils

    PubMed Central

    Sutton, Mark D

    2008-01-01

    Three-dimensional fossils, especially those preserving soft-part anatomy, are a rich source of palaeontological information; they can, however, be difficult to work with. Imaging of serial planes through an object (tomography) allows study of both the inside and outside of three-dimensional fossils. Tomography may be performed using physical grinding or sawing coupled with photography, through optical techniques of serial focusing, or using a variety of scanning technologies such as neutron tomography, magnetic resonance imaging and most usefully X-ray computed tomography. This latter technique is applicable at a variety of scales, and when combined with a synchrotron X-ray source can produce very high-quality data that may be augmented by phase-contrast information to enhance contrast. Tomographic data can be visualized in several ways, the most effective of which is the production of isosurface-based ‘virtual fossils’ that can be manipulated and dissected interactively. PMID:18426749

  6. Exploring Virtual Reality for Classroom Use: The Virtual Reality and Education Lab at East Carolina University.

    ERIC Educational Resources Information Center

    Auld, Lawrence W. S.; Pantelidis, Veronica S.

    1994-01-01

    Describes the Virtual Reality and Education Lab (VREL) established at East Carolina University to study the implications of virtual reality for elementary and secondary education. Highlights include virtual reality software evaluation; hardware evaluation; computer-based curriculum objectives which could use virtual reality; and keeping current…

  7. The Creation of a Theoretical Framework for Avatar Creation and Revision

    ERIC Educational Resources Information Center

    Beck, Dennis; Murphy, Cheryl

    2014-01-01

    Multi-User Virtual Environments (MUVE) are increasingly being used in education and provide environments where users can manipulate minute details of their avatar's appearance including those traditionally associated with gender and race identification. The ability to choose racial and gender characteristics differs from real-world educational…

  8. Techno-Mathematical Discourse: A Conceptual Framework for Analyzing Classroom Discussions

    ERIC Educational Resources Information Center

    Anderson-Pence, Katie L.

    2017-01-01

    Extensive research has been published on the nature of classroom mathematical discourse and on the impact of technology tools, such as virtual manipulatives (VM), on students' learning, while less research has focused on how technology tools facilitate that mathematical discourse. This paper presents an emerging construct, the Techno-Mathematical…

  9. Zooming in on Children's Thinking

    ERIC Educational Resources Information Center

    Tucker, Steven; Shumway, Jessica F.; Moyer-Packenham, Patricia S.; Jordan, Kerry E.

    2016-01-01

    Teachers increasingly use virtual manipulatives and other apps on touch-screen devices (e.g., "iPads") in an effort to help students understand mathematics concepts. However, students experience these apps and their affordances in different ways. The purpose of this article is to inform teachers' decisions about app implementation in the…

  10. Teaching Multistep Equations with Virtual Manipulatives to Secondary Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Satsangi, Rajiv; Hammer, Rachel; Evmenova, Anya S.

    2018-01-01

    Students with learning disabilities often struggle with the academic demands presented in secondary mathematics curricula. To combat these students' struggles, researchers have studied various pedagogical practices and classroom technologies for teaching standards covered in subjects such as algebra and geometry. However, as the role of computer-…

  11. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  12. STS-103 crew perform virtual reality training in building 9N

    NASA Image and Video Library

    1999-05-24

    S99-05678 (24 May 1999)--- Astronaut Jean-Francois Clervoy (right), STS-103 mission specialist representing the European Space Agency (ESA), "controls" the shuttle's remote manipulator system (RMS) during a simulation using virtual reality type hardware at the Johnson Space Center (JSC). Looking on is astronaut John M. Grunsfeld, mission specialist. Both astronauts are assigned to separate duties supporting NASA's third Hubble Space Telescope (HST) servicing mission. Clervoy will be controlling Discovery's RMS and Grunsfeld is one of four astronauts that will be paired off for a total of three spacewalks on the mission.

  13. Virtual Passive Controller for Robot Systems Using Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.

  14. A Human Machine Interface for EVA

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6-degree-of-freedom (dof) pointing device and a heads-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can also respond with speech in order to confirm selections, provide status and feedback and present text output. A candidate 6-dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information, including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field, including robot joint torques, end effector configuration, procedure checklists and virtual control panels.
With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques.
Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Under many circumstances, however, reducing their dependence on such personnel may improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.

  15. Optical Tweezer Assembly and Calibration

    NASA Technical Reports Server (NTRS)

    Collins, Timothy M.

    2004-01-01

    An optical tweezer, as the name implies, is a useful tool for precision manipulation of micro- and nano-scale objects. Using the principle of electromagnetic radiation pressure, an optical tweezer employs a tightly focused laser beam to trap and position objects of various shapes and sizes. These devices can trap micrometer- and nanometer-sized objects. An exciting possibility for optical tweezers is their future potential to manipulate and assemble micro- and nano-sized sensors. A typical optical tweezer makes use of the following components: laser, mirrors, lenses, a high-quality microscope, stage, Charge Coupled Device (CCD) camera, TV monitor and Position Sensitive Detectors (PSDs). The laser wavelength employed is typically in the visible or infrared spectrum. The laser beam is directed via mirrors and lenses into the microscope. It is then tightly focused by a high-magnification, high-numerical-aperture microscope objective into the sample slide, which is mounted on a translating stage. The sample slide contains a sealed, small volume of fluid that the objects are suspended in. The most common objects trapped by optical tweezers are dielectric spheres. When trapped, a sphere will literally snap into and center itself in the laser beam. The PSDs are mounted in such a way as to receive the backscatter after the beam has passed through the trap. PSDs used with the Differential Interference Contrast (DIC) technique provide highly precise data. Most optical tweezers employ lasers with power levels ranging from 10 to 100 milliwatts. Typical forces exerted on trapped objects are in the pico-newton range. When PSDs are employed, object movement can be resolved on a nanometer scale in a time range of milliseconds. Such accuracy, however, can only be utilized by calibrating the optical tweezer. Fortunately, an optical tweezer can be modeled accurately as a simple spring, which allows Hooke's law to be used.
My goal this summer at NASA Glenn Research Center is the assembly and calibration of an optical tweezer setup in the Instrumentation and Controls Division (5520). I am utilizing a custom LabVIEW Virtual Instrument program for data collection and microscope stage control. Helping me in my assignment are the following people: Mentor Susan Wrbanek (5520), Dr. Baha Jassemnejad (UCO) and Technicians Ken Weiland (7650) and James Williams (7650). Without their help, my task would not be possible.
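
    The spring-model calibration mentioned above is commonly performed with the equipartition theorem: in thermal equilibrium (1/2)k⟨x²⟩ = (1/2)k_BT, so the trap stiffness is k = k_BT/⟨x²⟩ and forces follow Hooke's law, F = -kx. A minimal sketch with synthetic position data (illustrative numbers, not measurements from this setup):

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition calibration of a trap: k = k_B * T / <x^2>,
    where x is the bead's displacement from the trap center."""
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return K_B * temperature_k / var

# Synthetic thermal jitter: ~10 nm RMS excursions about the trap center.
rng = random.Random(0)
xs = [rng.gauss(0.0, 10e-9) for _ in range(20000)]
k = trap_stiffness(xs)            # about 4e-5 N/m for these numbers
force_pN = k * 50e-9 * 1e12       # Hooke's-law force at 50 nm, in piconewtons
```

For 10 nm RMS excursions at room temperature this yields a stiffness near 4e-5 N/m, putting the restoring force at a 50 nm displacement in the piconewton range quoted above.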

  16. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    NASA Astrophysics Data System (ADS)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules in Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and motion of the object in images. In experiments, we used a microphone, a camera, and a robot which has a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in a periodic or non-periodic motion. The object then emits periodic/non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we had a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) relating to robot motion (efferent signals).

  17. Research on modeling and motion simulation of a spherical space robot with telescopic manipulator based on virtual prototype technology

    NASA Astrophysics Data System (ADS)

    Shi, Chengkun; Sun, Hanxu; Jia, Qingxuan; Zhao, Kailiang

    2009-05-01

    To realize the omni-directional movement and operating tasks of a spherical space robot system, this paper describes an innovative prototype and analyzes the dynamic characteristics of a spherical rolling robot with a telescopic manipulator. Based on the Newton-Euler equations, the kinematic and dynamic equations of the spherical robot's motion are derived in detail. The motion simulations of the robot in different environments are then developed with ADAMS. The simulation results validate the mathematical model of the system, and the dynamic model establishes a theoretical basis for subsequent work.
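
    As a generic illustration of the rolling kinematics such a model builds on (not the paper's Newton-Euler derivation): a sphere rolling without slipping satisfies v_center = ω × r, where r points from the contact point to the center, which reduces to v = Rω for spin about a horizontal axis.

```python
def cross(a, b):
    """3D vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rolling_velocity(omega, radius):
    """Center velocity of a sphere rolling without slipping on the z = 0
    plane: v_center = omega x r, with r = (0, 0, radius) the vector from
    the contact point up to the center (the contact point itself is at rest)."""
    r = (0.0, 0.0, radius)
    return cross(omega, r)

# A 0.15 m sphere spinning at 2 rad/s about the y-axis rolls along +x
# with speed v = R * omega = 0.3 m/s.
v = rolling_velocity((0.0, 2.0, 0.0), 0.15)
```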

  18. Real behavior in virtual environments: psychology experiments in a simple virtual-reality paradigm using video games.

    PubMed

    Kozlov, Michail D; Johansen, Mark K

    2010-12-01

    The purpose of this research was to illustrate the broad usefulness of simple video-game-based virtual environments (VEs) for psychological research on real-world behavior. To this end, this research explored several high-level social phenomena in a simple, inexpensive computer-game environment: the reduced likelihood of helping under time pressure, and the bystander effect, i.e., reduced helping in the presence of bystanders. In the first experiment, participants had to find the exit of a virtual labyrinth under either high or low time pressure. They encountered rooms with and without virtual bystanders, and in each room a virtual person requested assistance. Participants helped significantly less frequently under time pressure, but the presence or absence of a small number of bystanders did not significantly moderate helping. The second experiment increased the number of virtual bystanders, and participants were instructed to imagine that these were real people. Participants helped significantly less in rooms with large numbers of bystanders than in rooms with no bystanders, thus demonstrating a bystander effect. These results indicate that even sophisticated high-level social behaviors can be observed and experimentally manipulated in simple VEs, implying the broad usefulness of this paradigm in psychological research as a good compromise between experimental control and ecological validity.

  19. Cerebellar input configuration toward object model abstraction in manipulation tasks.

    PubMed

    Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo

    2011-08-01

    It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

  20. Hybrid Reality Lab Capabilities - Video 2

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

    Our Hybrid Reality and Advanced Operations Lab is developing highly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are made up entirely of digital information. This technology has been used at NASA to train astronauts, to teleoperate remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e. visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the live imagery; it enhances real environment information collected with a live sensor or viewport (e.g. camera, window) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, creating a higher level of immersion in which interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable physical representation of those objects, places them in the same coordinate system, and allows people to interact with both representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects.
    The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in a virtual reality environment can be a game changer for training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video accompanying this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.

  1. Context and hand posture modulate the neural dynamics of tool-object perception.

    PubMed

    Natraj, Nikhilesh; Poole, Victoria; Mizelle, J C; Flumini, Andrea; Borghi, Anna M; Wheaton, Lewis A

    2013-02-01

    Prior research has linked visual perception of tools with plausible motor strategies. Thus, observing a tool activates the putative action stream, including the left posterior parietal cortex, while observing a hand functionally grasping a tool involves the inferior frontal cortex. However, tool-use movements are performed in a contextual and grasp-specific manner, rather than in relative isolation. Our prior behavioral data have demonstrated that the context of tool use (established by pairing the tool with different objects) and varying hand grasp postures of the tool can interact to modulate subjects' reaction times while evaluating tool-object content. Specifically, perceptual judgment was delayed in the evaluation of functional tool-object pairings (Correct context) when the tool was grasped non-functionally (Manipulative posture). Here, we hypothesized that this behavioral interference seen with the Manipulative posture would be due to increased and extended left parietofrontal activity, possibly underlying motor simulations when resolving action conflict due to this particular grasp, at time scales relevant to the behavioral data. Further, we hypothesized that this neural effect would be restricted to the Correct tool-object context, wherein action affordances are at a maximum. 64-channel electroencephalography (EEG) was recorded from 16 right-handed subjects while viewing images depicting three classes of tool-object contexts: functionally Correct (e.g. coffee pot-coffee mug), functionally Incorrect (e.g. coffee pot-marker), and Spatial (coffee pot-milk). The Spatial context pairs a tool and object that would not functionally match, but may commonly appear in the same scene. These three contexts were modified by hand interaction: No Hand, Static Hand near the tool, Functional Hand posture, and Manipulative Hand posture. The Manipulative posture is convenient for relocating a tool but does not afford a functional engagement of the tool on the target object.
Subjects were instructed to visually assess whether the pictures displayed correct tool-object associations. EEG data was analyzed in time-voltage and time-frequency domains. Overall, Static Hand, Functional and Manipulative postures cause early activation (100-400ms post image onset) of parietofrontal areas, to varying intensity in each context, when compared to the No Hand control condition. However, when context is Correct, only the Manipulative Posture significantly induces extended neural responses, predominantly over right parietal and right frontal areas [400-600ms post image onset]. Significant power increase was observed in the theta band [4-8Hz] over the right frontal area, [0-500ms]. In addition, when context is Spatial, Manipulative posture alone significantly induces extended neural responses, over bilateral parietofrontal and left motor areas [400-600ms]. Significant power decrease occurred primarily in beta bands [12-16, 20-25Hz] over the aforementioned brain areas [400-600ms]. Here, we demonstrate that the neural processing of tool-object perception is sensitive to several factors. While both Functional and Manipulative postures in Correct context engage predominantly an early left parietofrontal circuit, the Manipulative posture alone extends the neural response and transitions to a late right parietofrontal network. This suggests engagement of a right neural system to evaluate action affordances when hand posture does not support action (Manipulative). Additionally, when tool-use context is ambiguous (Spatial context), there is increased bilateral parietofrontal activation and, extended neural response for the Manipulative posture. These results point to the existence of other networks evaluating tool-object associations when motoric affordances are not readily apparent and underlie corresponding delayed perceptual judgment in our prior behavioral data wherein Manipulative postures had exclusively interfered in judging tool-object content. 
Copyright © 2012 Elsevier Ltd. All rights reserved.
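    The band-limited power changes reported above (theta [4-8 Hz] increase, beta [12-16, 20-25 Hz] decrease) rest on standard spectral estimation. A minimal periodogram sketch, with a synthetic 6 Hz sinusoid standing in for real EEG (sampling rate and signal are illustrative, not the study's data):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average periodogram power of `signal` within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

fs = 256                                  # a typical EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6 * t)           # synthetic 6 Hz theta oscillation
theta = band_power(eeg, fs, 4, 8)         # theta band [4-8 Hz]
beta = band_power(eeg, fs, 20, 25)        # beta band [20-25 Hz]
```

    Real analyses like the one in the abstract use time-resolved (e.g. wavelet or short-time Fourier) estimates per channel and condition; the band-averaging step is the same in spirit.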

  2. Virtual Jupiter - Real Learning

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, Lanika; Speck, A.; Laffey, J.

    2010-01-01

    How many earthlings have visited Jupiter? None. How many students have visited a virtual Jupiter to fulfill their introductory astronomy courses' requirements? Within the next six months, over 100 students from the University of Missouri will get a chance to explore the planet and its Galilean moons using a 3D virtual environment created especially for them to learn Kepler's and Newton's laws, eclipses, parallax, and other concepts in astronomy. The virtual world of the Jupiter system is a unique 3D environment that allows students to learn course material (physical laws and concepts in astronomy) while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The virtual learning environment lets students work individually or collaborate with their teammates. The 3D world is also a great opportunity for research in astronomy education: to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes, and to explore how learners' spatial awareness can be enhanced by working in a 3-dimensional environment.

  3. COGNITION, ACTION, AND OBJECT MANIPULATION

    PubMed Central

    Rosenbaum, David A.; Chapman, Kate M.; Weigelt, Matthias; Weiss, Daniel J.; van der Wel, Robrecht

    2012-01-01

    Although psychology is the science of mental life and behavior, it has paid little attention to the means by which mental life is translated into behavior. One domain where links between cognition and action have been explored is the manipulation of objects. This article reviews psychological research on this topic, with special emphasis on the tendency to grasp objects differently depending on what one plans to do with the objects. Such differential grasping has been demonstrated in a wide range of object manipulation tasks, including grasping an object in a way that reveals anticipation of the object's future orientation, height, and required placement precision. Differential grasping has also been demonstrated in a wide range of behaviors, including one-hand grasps, two-hand grasps, walking, and transferring objects from place to place as well as from person to person. The populations in whom the tendency has been shown are also diverse, including nonhuman primates as well as human adults, children, and babies. Meanwhile, the tendency is compromised in a variety of clinical populations and in children of a surprisingly advanced age. Verbal working memory is compromised as well if words are memorized while object manipulation tasks are performed; the recency portion of the serial position curve is reduced in this circumstance. In general, the research reviewed here points to rich connections between cognition and action as revealed through the study of object manipulation. Other implications concern affordances, Donders' Law, and naturalistic observation and the teaching of psychology. PMID:22448912

  4. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits, not least being cost-effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open-source, high-quality 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection, viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems.
In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  5. Varieties of virtualization

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1991-01-01

    Natural environments have a content, i.e., the objects in them; a geometry, i.e., a pattern of rules for positioning and displacing the objects; and a dynamics, i.e., a system of rules describing the effects of forces acting on the objects. Human interaction with most common natural environments has been optimized by centuries of evolution. Virtual environments created through the human-computer interface similarly have a content, geometry, and dynamics, but the arbitrary character of the computer simulation creating them does not insure that human interaction with these virtual environments will be natural. The interaction, indeed, could be supernatural but it also could be impossible. An important determinant of the comprehensibility of a virtual environment is the correspondence between the environmental frames of reference and those associated with the control of environmental objects. The effects of rotation and displacement of control frames of reference with respect to corresponding environmental references differ depending upon whether perceptual judgement or manual tracking performance is measured. The perceptual effects of frame of reference displacement may be analyzed in terms of distortions in the process of virtualizing the synthetic environment space. The effects of frame of reference displacement and rotation have been studied by asking subjects to estimate exocentric direction in a virtual space.

  6. Are persons with nervous habit nervous? A preliminary examination of habit function in a nonreferred population.

    PubMed Central

    Woods, D W; Miltenberger, R G

    1996-01-01

    In this study, 44 individuals were exposed to three conditions (anxiety, boredom, and neutral) while being covertly videotaped. The videotapes were then scored for the occurrence of five classes of habits: hair, face, and object manipulation; object mouthing; and repetitive movement of the limbs. Results showed that hair and face manipulation increased during the anxiety condition, whereas object manipulation increased in the boredom condition. The implications of this research are discussed. PMID:8682744

  7. Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging.

    PubMed

    Boronat, Consuelo B; Buxbaum, Laurel J; Coslett, H Branch; Tang, Kathy; Saffran, Eleanor M; Kimberg, Daniel Y; Detre, John A

    2005-05-01

    A prominent account of conceptual knowledge proposes that information is distributed over visual, tactile, auditory, motor and verbal-declarative attribute domains to the degree to which these features were activated when the knowledge was acquired [D.A. Allport, Distributed memory, modular subsystems and dysphasia, In: S.K. Newman, R. Epstein (Eds.), Current perspectives in dysphasia, Churchill Livingstone, Edinburgh, 1985, pp. 32-60]. A corollary is that when drawing upon this knowledge (e.g., to answer questions), particular aspects of this distributed information are re-activated as a function of the requirements of the task at hand [L.J. Buxbaum, E.M. Saffran, Knowledge of object manipulation and object function: dissociations in apraxic and non-apraxic subjects, Brain and Language, 82 (2002) 179-199; L.J. Buxbaum, T. Veramonti, M.F. Schwartz, Function and manipulation tool knowledge in apraxia: knowing 'what for' but not 'how', Neurocase, 6 (2000) 83-97; W. Simmons, L. Barsalou, The similarity-in-topography principle: Reconciling theories of conceptual deficits, Cognitive Neuropsychology, 20 (2003) 451-486]. This account predicts that answering questions about object manipulation should activate brain regions previously identified as components of the distributed sensory-motor system involved in object use, whereas answering questions about object function (that is, the purpose it serves) should activate regions identified as components of the systems supporting verbal-declarative features. These predictions were tested in a functional magnetic resonance imaging (fMRI) study in which 15 participants viewed picture or word pairs denoting manipulable objects and determined whether the objects are manipulated similarly (M condition) or serve the same function (F condition).
Significantly greater and more extensive activations in the left inferior parietal lobe bordering the intraparietal sulcus were seen in the M condition with pictures and, to a lesser degree, words. These findings are consistent with the known role of this region in skilled object use [K.M. Heilman, L.J. Gonzalez Rothi, Apraxia, In: K.M. Heilman, E. Valenstein (Eds.), Clinical Neuropsychology, Oxford University Press, New York, 1993, pp. 141-150] as well as previous fMRI results [M. Kellenbach, M. Brett, K. Patterson, Actions speak louder than functions: the importance of manipulability and action in tool representation, Journal of Cognitive Neuroscience, 15 (2003) 30-46] and behavioral findings in brain-lesion patients [L.J. Buxbaum, E.M. Saffran, Knowledge of object manipulation and object function: dissociations in apraxic and non-apraxic subjects, Brain and Language, 82 (2002) 179-199]. No brain regions were significantly more activated in the F than M condition. These data suggest that brain regions specialized for sensory-motor function are a critical component of distributed representations of manipulable objects.

  8. Kinematics and force analysis of a robot hand based on an artificial biological control scheme

    NASA Astrophysics Data System (ADS)

    Kim, Man Guen

    An artificial biological control scheme (ABCS) is used to study the kinematics and statics of a multifingered hand with a view to developing an efficient control scheme for grasping. The ABCS is based on observation of human grasping, intuitively taking it as the optimum model for robotic grasping. A final chapter proposes several grasping measures to be applied to the design and control of a robot hand. The ABCS leads to the definition of two modes of grasping action: natural grasping (NG), the human motion of grasping an object without any special task command, and forced grasping (FG), the motion associated with a specific task. The grasping direction line (GDL) is defined to determine the position and orientation of the object in the hand. The kinematic model of a redundant robot arm and hand is developed by reconstructing the human upper extremity and using anthropometric measurement data. Inverse kinematic analyses of various types of precision and power grasping are carried out by replacing the three links with one virtual link and using the GDL. The static force analysis for grasping with the fingertips is studied by applying the ABCS. A measure of grasping stability that maintains the positions of contacts as well as the configurations of the redundant fingers is derived. The grasping stability measure (GSM), a measure of how well the hand maintains the grasp under external disturbance, is derived from the torque vector of the hand calculated from the external force applied to the object. The grasping manipulability measure (GMM), a measure of how well the hand manipulates the object for the task, is derived from the joint velocity vector of the hand calculated from the object velocity. The grasping performance measure (GPM) is defined as the sum of the directional components of the GSM and the GMM.
Finally, a planar redundant hand with two fingers is examined in order to study the various postures of the hand performing pinch grasping by applying the GSM and the GMM.
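    The GMM above relates joint velocities to object velocity through the hand's Jacobian. A common scalar proxy for this kind of measure is Yoshikawa's manipulability index w = sqrt(det(J J^T)), sketched here for a hypothetical planar two-link finger; the link lengths, postures, and the use of this particular index are illustrative assumptions, not the dissertation's exact definitions:

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability index w = sqrt(det(J J^T))."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def planar_jacobian(q1, q2):
    """Jacobian of a planar two-link finger with unit link lengths."""
    return np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])

# Manipulability collapses as the finger straightens (singular posture).
w_bent = manipulability(planar_jacobian(0.3, np.pi / 2))      # ~= |sin(q2)| = 1
w_straight = manipulability(planar_jacobian(0.3, 1e-6))        # near zero
```

    For this 2x2 case the index reduces to |det J| = |sin q2|, which makes the loss of manipulability at the straightened posture easy to see.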

  9. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.

    PubMed

    Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques

    2017-07-01

    Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. 
    Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate deformations of solid organs' surfaces to their inner structures, including tumors, with good accuracy and automated, robust tracking.
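    Registering tracked fiducial positions to a virtual model, as described above, typically begins with a least-squares rigid alignment. A minimal Kabsch-algorithm sketch with synthetic fiducials (this is a generic illustration, not the authors' FEM-based pipeline, and the six points are made up):

```python
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping points `src`
    onto `dst` via the Kabsch algorithm."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Six fiducials, displaced by a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = register_rigid(src, dst)   # recovers R_true and the translation
```

    In the deformable setting of the paper, a rigid fit like this would only be a starting point; the FEM model then propagates residual surface displacement into the organ volume.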

  10. Information Retrieval in Virtual Universities

    ERIC Educational Resources Information Center

    Puustjärvi, Juha; Pöyry, Päivi

    2006-01-01

    Information retrieval in the context of virtual universities deals with the representation, organization, and access to learning objects. The representation and organization of learning objects should provide the learner with an easy access to the learning objects. In this article, we give an overview of the ONES system, and analyze the relevance…

  11. Effects of transcranial direct current stimulation on the control of finger force during dexterous manipulation in healthy older adults.

    PubMed

    Parikh, Pranav J; Cole, Kelly J

    2015-01-01

    The contribution of poor finger force control to age-related decline in manual dexterity is above and beyond ubiquitous behavioral slowing. Altered control of the finger forces can impart unwanted torque on the object affecting its orientation, thus impairing manual performance. Anodal transcranial direct current stimulation (tDCS) over primary motor cortex (M1) has been shown to improve the performance speed on manual tasks in older adults. However, the effects of anodal tDCS over M1 on the finger force control during object manipulation in older adults remain to be fully explored. Here we determined the effects of anodal tDCS over M1 on the control of grip force in older adults while they manipulated an object with an uncertain mechanical property. Eight healthy older adults were instructed to grip and lift an object whose contact surfaces were unexpectedly made more or less slippery across trials using acetate and sandpaper surfaces, respectively. Subjects performed this task before and after receiving anodal or sham tDCS over M1 on two separate sessions using a cross-over design. We found that older adults used significantly lower grip force following anodal tDCS compared to sham tDCS. Friction measured at the finger-object interface remained invariant after anodal and sham tDCS. These findings suggest that anodal tDCS over M1 improved the control of grip force during object manipulation in healthy older adults. Although the cortical networks for representing objects and manipulative actions are complex, the reduction in grip force following anodal tDCS over M1 might be due to a cortical excitation yielding improved processing of object-specific sensory information and its integration with the motor commands for production of manipulative forces. Our findings indicate that tDCS has a potential to improve the control of finger force during dexterous manipulation in older adults.

  12. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the way the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and to display those data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices: a space mouse and traditional keyboard controls.

  13. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.

    PubMed

    Aronov, Dmitriy; Tank, David W

    2014-10-22

    Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.

  14. Virtual reality and robotics for stroke rehabilitation: where do we go from here?

    PubMed

    Wade, Eric; Winstein, Carolee J

    2011-01-01

    Promoting functional recovery after stroke requires collaborative and innovative approaches to neurorehabilitation research. Task-oriented training (TOT) approaches that include challenging, adaptable, and meaningful activities have led to successful outcomes in several large-scale multisite definitive trials. This, along with recent technological advances of virtual reality and robotics, provides a fertile environment for furthering clinical research in neurorehabilitation. Both virtual reality and robotics make use of multimodal sensory interfaces to affect human behavior. In the therapeutic setting, these systems can be used to quantitatively monitor, manipulate, and augment the users' interaction with their environment, with the goal of promoting functional recovery. This article describes recent advances in virtual reality and robotics and the synergy with best clinical practice. Additionally, we describe the promise shown for automated assessments and in-home activity-based interventions. Finally, we propose a broader approach to ensuring that technology-based assessment and intervention complement evidence-based practice and maintain a patient-centered perspective.

  15. Controlling social stress in virtual reality environments.

    PubMed

    Hartanto, Dwi; Kampmann, Isabel L; Morina, Nexhmedin; Emmelkamp, Paul G M; Neerincx, Mark A; Brinkman, Willem-Paul

    2014-01-01

    Virtual reality exposure therapy has been proposed as a viable alternative in the treatment of anxiety disorders, including social anxiety disorder. Therapists could benefit from extensive control over anxiety-eliciting stimuli during virtual exposure. This study examined two stimulus controls: the social dialogue situation, and the dialogue feedback responses (negative or positive) between a human and a virtual character. In the first study, 16 participants were exposed to three virtual reality scenarios: a neutral virtual world, a blind date scenario, and a job interview scenario. Results showed a significant difference between the three virtual scenarios in the level of self-reported anxiety and heart rate. In the second study, 24 participants were exposed to a job interview scenario in a virtual environment in which the ratio between negative and positive dialogue feedback responses of a virtual character was systematically varied on the fly. Within a dialogue, more positive feedback resulted in less self-reported anxiety, lower heart rate, and longer answers, whereas more negative feedback from the virtual character resulted in the opposite. Across the eight dialogue conditions, the dialogue stressor ratio correlated strongly with the mean SUD score, heart rate, and audio length: r(6) = 0.91, p = 0.002; r(6) = 0.76, p = 0.028; and r(6) = -0.94, p = 0.001, respectively. Furthermore, more anticipatory anxiety reported before exposure coincided with more self-reported anxiety and shorter answers during the virtual exposure. These results demonstrate that social dialogues in a virtual environment can be effectively manipulated for therapeutic purposes.
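
    The reported statistics use the r(df) convention, where df = n - 2 for the n = 8 dialogue conditions. A minimal sketch of how such a condition-level Pearson correlation is computed (the per-condition values below are invented for illustration, not data from the study):

```python
import numpy as np

# Hypothetical per-condition means for 8 dialogue stressor ratios
# (illustrative values only, not the study's data).
stressor_ratio = np.array([0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 1.0])
mean_sud       = np.array([2.1, 2.4, 2.9, 3.3, 3.8, 4.1, 4.6, 5.0])

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson_r(stressor_ratio, mean_sud)
df = len(stressor_ratio) - 2   # degrees of freedom, reported as r(6)
```

    With eight conditions, df = 6, which is why the abstract writes r(6).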

  16. Virtual bystanders in a language lesson: examining the effect of social evaluation, vicarious experience, cognitive consistency and praising on students' beliefs, self-efficacy and anxiety in a virtual reality environment.

    PubMed

    Qu, Chao; Ling, Yun; Heynderickx, Ingrid; Brinkman, Willem-Paul

    2015-01-01

    Bystanders in a real-world social setting can influence people's beliefs and behavior. This study examines whether this effect can be recreated in a virtual environment by exposing people to virtual bystanders in a classroom setting. Participants (n = 26) first witnessed virtual students answering questions from an English teacher, after which they were asked to answer questions from the teacher themselves as part of a simulated training for spoken English. During the experiment the attitudes of the other virtual students in the classroom were manipulated: they whispered either positive or negative remarks to each other while a virtual student or a participant was talking. The results show that the expressed attitude of virtual bystanders towards the participants affected their self-efficacy and their avoidance behavior. Furthermore, witnessing bystanders comment negatively on the performance of other students raised the participants' heart rate when it was their turn to speak. Two-way interaction effects were also found on self-reported anxiety and self-efficacy. After witnessing bystanders' positive attitude towards peer students, participants' self-efficacy when answering questions received a boost when bystanders were also positive towards them, and a blow when bystanders reversed their attitude by being negative towards them. Inconsistency between the bystanders' attitudes towards virtual peers and towards the participants was not, however, found to produce a larger change in the participants' beliefs than consistency. Finally, the results reveal that virtual flattery or destructive criticism affected the participants' beliefs not only about the virtual bystanders but also about the neutral teacher. Together these findings show that virtual bystanders in a classroom can affect people's beliefs, anxiety, and behavior.

  18. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the depth class numbers using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
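
    The multimodal thresholding step described above can be illustrated with a small sketch (a hypothetical histogram-peak approach, not the authors' implementation): depth values are histogrammed, local maxima of the histogram are taken as depth classes, and each pixel is labelled with the nearest class.

```python
import numpy as np

def depth_classes(depth_map, bins=32):
    """Select depth classes as local maxima of the depth histogram,
    then label each pixel with the nearest class (illustrative only)."""
    valid = depth_map[np.isfinite(depth_map)]
    hist, edges = np.histogram(valid, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Interior bins that dominate both neighbours are taken as modes.
    peaks = [centers[i] for i in range(1, bins - 1)
             if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
    if not peaks:                      # degenerate unimodal case
        peaks = [float(valid.mean())]
    peaks = np.array(peaks)
    labels = np.abs(depth_map[..., None] - peaks).argmin(axis=-1)
    return peaks, labels

# Two synthetic depth planes at 2 m and 5 m with mild noise.
rng = np.random.default_rng(0)
depth = np.where(rng.random((64, 64)) < 0.5, 2.0, 5.0)
depth += rng.normal(0, 0.05, depth.shape)
peaks, labels = depth_classes(depth)
```

    In a real pipeline the resulting label image would then be intersected with the luminance regions during the merge step.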

  19. Modeling and Design of an Electro-Rheological Fluid Based Haptic System for Tele-Operation of Space Robots

    NASA Technical Reports Server (NTRS)

    Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph

    2000-01-01

    For many years, the robotics community sought to develop robots that could eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that some tasks can be performed significantly better by humans but, due to associated hazards, distance, physical limitations, and other causes, only robots can be employed to perform them. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they have great difficulty emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually allow only a limited operator workspace. In this paper a novel haptic interface is presented that enables human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to give human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability, and assure crew safety. The electrically controlled stiffness is obtained using constrained electrorheological fluids (ERF), which change their viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device, in which a change in the system viscosity occurs in proportion to the force to be transmitted. In this paper, we present the results of our modeling, simulation, and initial testing of such an ERF-based haptic device.
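
    As a toy illustration of the force-reflection principle described above (a hypothetical viscous-damper model with made-up constants, not the authors' device equations), the resisting force can be modeled as a damping coefficient that grows with the applied electric field:

```python
# Illustrative ERF damper model: the damping coefficient rises with the
# applied electric field, so the force felt by the operator can be
# commanded electrically. c0 and k are hypothetical device constants.
def erf_damping_coefficient(c0, k, field):
    """Effective damping coefficient (N*s/m) under field `field` (kV/mm)."""
    return c0 * (1.0 + k * field)

def reflected_force(velocity, field, c0=5.0, k=2.0):
    """Resisting force (N) felt when moving at `velocity` (m/s)."""
    return erf_damping_coefficient(c0, k, field) * velocity

# With no field the interface feels light; with a field applied it stiffens.
f_off = reflected_force(0.1, field=0.0)   # 5.0 * 0.1 = 0.5 N
f_on  = reflected_force(0.1, field=3.0)   # 5.0 * 7 * 0.1 = 3.5 N
```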

  20. Is All Motivation Good for Learning? Dissociable Influences of Approach and Avoidance Motivation in Declarative Memory

    ERIC Educational Resources Information Center

    Murty, Vishnu P.; LaBar, Kevin S.; Hamilton, Derek A.; Adcock, R. Alison

    2011-01-01

    The present study investigated the effects of approach versus avoidance motivation on declarative learning. Human participants navigated a virtual reality version of the Morris water task, a classic spatial memory paradigm, adapted to permit the experimental manipulation of motivation during learning. During this task, participants were instructed…

  1. Third Graders' Mathematical Thinking of Place Value through the Use of Concrete and Virtual Manipulatives

    ERIC Educational Resources Information Center

    Burris, Justin T.

    2010-01-01

    As one research priority for mathematics education is "to research how mathematical meanings are structured by tools available," the present study examined mathematical representations more closely by investigating instructional modes of representation (Noss, Healy & Hoyles, 1997). The study compared two modes of instruction of place value with…

  2. Necessity Fuels Creativity: Adapting Long-Distance Collaborative Methods for the Classroom

    ERIC Educational Resources Information Center

    Sopoci Drake, Katie; Larson, Eliza; Rugh, Rachel; Tait, Barbara

    2016-01-01

    Improved technology has made it possible to virtually bridge distance between dance makers, rendering physical location another choreographic device to be manipulated. Long-distance collaboration as an artistic process is not only a fertile new ground for creation and necessary for many practicing dance artists in the field today, but there is…

  3. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  4. Improving the Fraction Word Problem Solving of Students with Mathematics Learning Disabilities: Interactive Computer Application

    ERIC Educational Resources Information Center

    Shin, Mikyung; Bryant, Diane P.

    2017-01-01

    Students with mathematics learning disabilities (MLD) have a weak understanding of fraction concepts and skills, which are foundations of algebra. Such students might benefit from computer-assisted instruction that utilizes evidence-based instructional components (cognitive strategies, feedback, virtual manipulatives). As a pilot study using a…

  5. Students' Attention When Using Touchscreens and Pen Tablets in a Mathematics Classroom

    ERIC Educational Resources Information Center

    Chen, Cheng-Huan; Chiu, Chiung-Hui; Lin, Chia-Ping; Chou, Ying-Chun

    2017-01-01

    Aim/Purpose: The present study investigated and compared students' attention in terms of time-on-task and number of distractors between using a touchscreen and a pen tablet in mathematical problem solving activities with virtual manipulatives. Background: Although there is an increasing use of these input devices in educational practice, little…

  6. Fostering Mathematical Understanding through Physical and Virtual Manipulatives

    ERIC Educational Resources Information Center

    Loong, Esther Yook Kin

    2014-01-01

    When solving mathematical problems, many students know the procedure to get to the answer but cannot explain why they are doing it in that way. According to Skemp (1976) these students have instrumental understanding but not relational understanding of the problem. They have accepted the rules to arriving at the answer without questioning or…

  7. User Acceptance of a Haptic Interface for Learning Anatomy

    ERIC Educational Resources Information Center

    Yeom, Soonja; Choi-Lundberg, Derek; Fluck, Andrew; Sale, Arthur

    2013-01-01

    Visualizing the structure and relationships in three dimensions (3D) of organs is a challenge for students of anatomy. To provide an alternative way of learning anatomy engaging multiple senses, we are developing a force-feedback (haptic) interface for manipulation of 3D virtual organs, using design research methodology, with iterations of system…

  8. The Relationship between the Use of Virtual Manipulatives and Mathematics Performance among Fifth Grade Students

    ERIC Educational Resources Information Center

    Bryan, Rosemarie

    2014-01-01

    Students in U.S. public schools have consistently recorded substandard scores on measures of school performance in mathematics. This substandard performance could adversely affect the nation's future economic competitiveness, growth, and welfare. Educational and political leaders have sought school reforms that will result in U.S. students scoring…

  9. Virtual and Concrete Manipulatives: A Comparison of Approaches for Solving Mathematics Problems for Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Satsangi, Rajiv; Doughty, Teresa Taber; Courtney, William T.

    2014-01-01

    Students with autism spectrum disorder (ASD) are included in general education classes and expected to participate in general education content, such as mathematics. Yet, little research explores academically-based mathematics instruction for this population. This single subject alternating treatment design study explored the effectiveness of…

  10. Manipulation of volumetric patient data in a distributed virtual reality environment.

    PubMed

    Dech, F; Ai, Z; Silverstein, J C

    2001-01-01

    Due to increases in network speed and bandwidth, distributed exploration of medical data in immersive Virtual Reality (VR) environments is becoming increasingly feasible. The volumetric display of radiological data in such environments presents a unique set of challenges. The sheer size and complexity of the datasets involved not only make them difficult to transmit to remote sites; these datasets also require extensive user interaction to make them understandable to the investigator and manageable to the rendering hardware. A sophisticated VR user interface is required for the clinician to focus on the aspects of the data that will provide educational and/or diagnostic insight. We describe a software system of data acquisition, data display, Tele-Immersion, and data manipulation that supports interactive, collaborative investigation of large radiological datasets. The hardware required in this strategy is still at the high end of the graphics workstation market. Future software ports to Linux and NT, along with the rapid development of PC graphics cards, open the possibility for later work with Linux or NT PCs and PC clusters.

  11. [Study on the effect of vertebra semi-dislocation on the stress distribution in the facet joint and intervertebral disc of patients with cervical syndrome based on a three-dimensional finite element model].

    PubMed

    Zhang, Ming-cai; Lü, Si-zhe; Cheng, Ying-wu; Gu, Li-xu; Zhan, Hong-sheng; Shi, Yin-yu; Wang, Xiang; Huang, Shi-rong

    2011-02-01

    To study the effect of vertebra semi-dislocation on the stress distribution in the facet joints and intervertebral discs of patients with cervical syndrome using a three-dimensional finite element model. A randomly chosen patient with cervical spondylosis (male, 28 years old, diagnosed with cervical vertebra semi-dislocation by dynamic and static palpation and X-ray) was scanned from C(1) to C(7) by CT at a 0.75 mm slice thickness. Based on the CT data, a three-dimensional finite element model of the semi-dislocated cervical vertebrae (C(4)-C(6)) was constructed in software. On this model, a virtual manipulation was applied to correct the vertebra semi-dislocation, and the stress distribution was analyzed. The finite element analysis showed that the stress distribution of the C(5-6) facet joint and intervertebral disc changed after the virtual manipulation. Vertebra semi-dislocation leads to an abnormal stress distribution in the facet joints and intervertebral discs.

  12. The development of a virtual camera system for astronaut-rover planetary exploration.

    PubMed

    Platt, Donald W; Boy, Guy A

    2012-01-01

    A virtual assistant is being developed for use by astronauts as they use rovers to explore the surface of other planets. This interactive database, called the Virtual Camera (VC), gives the user better situational awareness for exploration. It can be used for training, data analysis, and augmentation of actual surface exploration. This paper describes the development efforts and the Human-Computer Interaction (HCI) considerations for implementing a first-generation VC on a tablet mobile computing device. Scenarios for use are presented, along with evaluation and success criteria such as efficiency (in terms of processing time and precision), situational awareness, learnability, usability, and robustness. Initial testing and the impact of HCI design considerations on manipulation and on improvement in situational awareness using a prototype VC are discussed.

  13. DJ Sim: a virtual reality DJ simulation game

    NASA Astrophysics Data System (ADS)

    Tang, Ka Yin; Loke, Mei Hwan; Chin, Ching Ling; Chua, Gim Guan; Chong, Jyh Herng; Manders, Corey; Khan, Ishtiaq Rasool; Yuan, Miaolong; Farbiz, Farzam

    2009-02-01

    This work describes the development of a 3D Virtual Reality (VR) DJ simulation game intended for a stereoscopic display. Using a DLP projector and shutter glasses, the user plays a game in which he or she is a DJ in a night club. The night club's music is playing, and the DJ "scratches" in time with it. Much in the flavor of Guitar Hero or Dance Dance Revolution, a virtual turntable projects information about how the user should perform. The user needs only a small set of hand gestures, corresponding to the turntable scratch movements, to play the game. As the music plays, a series of moving arrows approaching the DJ's turntable instructs the user when and how to perform the scratches.

  14. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo) that uses pneumatic suspension is being studied for use in virtual reality training. Pogo maintains a constant partial-gravity simulation with a variation in simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs that drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  15. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated and replaced by wireless devices such as PDAs, smart phones, TabletPCs, portable gaming consoles, and PocketPCs.

  16. Learning Objects and Virtual Learning Environments Technical Evaluation Criteria

    ERIC Educational Resources Information Center

    Kurilovas, Eugenijus; Dagiene, Valentina

    2009-01-01

    The main scientific problems investigated in this article deal with technical evaluation of quality attributes of the main components of e-Learning systems (referred here as DLEs--Digital Libraries of Educational Resources and Services), i.e., Learning Objects (LOs) and Virtual Learning Environments (VLEs). The main research object of the work is…

  17. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors.

    PubMed

    Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C; Ude, Aleš; Ollero, Aníbal

    2016-05-14

    Giving unmanned aerial vehicles (UAVs) the ability to manipulate objects vastly extends their range of possible applications. This applies to rotary-wing UAVs in particular, whose hovering capability provides a suitable position for in-flight manipulation. Their manipulation skills must be suited to the primarily natural, partially known environments in which UAVs mostly operate. We have developed an on-board object extraction method that calculates the information necessary for autonomous grasping of objects, without the need to provide a model of the object's shape. A local map of the work zone is generated using depth information, and object candidates are extracted by detecting areas that differ from our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and dominant axis. This information is then passed to a grasping module. Our method works under the assumptions that objects are static and not clustered, that they have visual features, and that the floor shape of the work-zone area is known. We used low-cost cameras whose depth information yields noisy point clouds, but our method has proved robust enough to process this data and return accurate results.
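
    The centroid and dominant-axis computation described above can be sketched with a principal component analysis of the object's sparse point cloud (a generic technique; the paper's exact implementation may differ):

```python
import numpy as np

def centroid_and_axis(points):
    """Return the centroid and dominant axis (unit vector) of an Nx3 cloud."""
    c = points.mean(axis=0)
    # Dominant axis = eigenvector of the covariance with largest eigenvalue.
    cov = np.cov((points - c).T)
    vals, vecs = np.linalg.eigh(cov)
    return c, vecs[:, np.argmax(vals)]

# Synthetic elongated cloud along x, centred at (1, 2, 0), standing in for
# a sparse object reconstruction from noisy stereo depth.
rng = np.random.default_rng(1)
pts = rng.normal([1, 2, 0], [1.0, 0.05, 0.05], size=(500, 3))
c, axis = centroid_and_axis(pts)
```

    The grasping module would then place the gripper across the dominant axis at the centroid.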

  18. Human factors optimization of virtual environment attributes for a space telerobotic control station

    NASA Astrophysics Data System (ADS)

    Lane, Jason Corde

    2000-10-01

    Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display: a graphical simulation that shows the commands sent to the actual system in real time. The primary goal of this research was to show that a commanded display is an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments investigated how subjects compensated for time delay under a variety of conditions while controlling a 7-degree-of-freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship held even at different manipulator speeds, at varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time-delay effects, with up to a 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts with the task worksite even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display remained a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted; a correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head-mounted display and LCD shutter glasses, were compared. The results of this testing were used to develop an effective virtual environment control station for controlling a multi-arm robot.

  19. ELECTRONIC MASTER SLAVE MANIPULATOR

    DOEpatents

    Goertz, R.C.; Thompson, Wm.M.; Olsen, R.A.

    1958-08-01

    A remote-control manipulator is described in which the master and slave arms are electrically connected to produce the desired motions. A response signal is provided in the master unit so that the operator may sense the feel of the object and not exert pressures that would ordinarily damage delicate objects. This apparatus permits the manipulation of objects at a great distance, viewed over a closed-circuit TV system, allowing a remote operator to carry out operations in an extremely dangerous area with complete safety.

  20. Effects of motor congruence on visual working memory.

    PubMed

    Quak, Michel; Pecher, Diane; Zeelenberg, Rene

    2014-10-01

    Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

  1. Attention and perceptual implicit memory: effects of selective versus divided attention and number of visual objects.

    PubMed

    Mulligan, Neil W

    2002-08-01

    Extant research presents conflicting results on whether manipulations of attention during encoding affect perceptual priming. Two suggested mediating factors are type of manipulation (selective vs divided) and whether attention is manipulated across multiple objects or within a single object. Words printed in different colors (Experiment 1) or flanked by colored blocks (Experiment 2) were presented at encoding. In the full-attention condition, participants always read the word, in the unattended condition they always identified the color, and in the divided-attention conditions, participants attended to both word identity and color. Perceptual priming was assessed with perceptual identification and explicit memory with recognition. Relative to the full-attention condition, attending to color always reduced priming. Dividing attention between word identity and color, however, only disrupted priming when these attributes were presented as multiple objects (Experiment 2) but not when they were dimensions of a common object (Experiment 1). On the explicit test, manipulations of attention always affected recognition accuracy.

  2. Hierarchical Robot Control System and Method for Controlling Select Degrees of Freedom of an Object Using Multiple Manipulators

    NASA Technical Reports Server (NTRS)

    Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor)

    2013-01-01

    A robotic system includes a robot having manipulators for grasping an object using one of a plurality of grasp types during a primary task, and a controller. The controller controls the manipulators during the primary task using a multiple-task control hierarchy and automatically parameterizes the internal forces of the system for each grasp type in response to an input signal. The primary task is defined at an object level of control, e.g., using a closed-chain transformation, such that only select degrees of freedom are commanded for the object. A control system for the robotic system has a host machine and an algorithm for controlling the manipulators using the above hierarchy. A method for controlling the system includes receiving and processing the input signal using the host machine, including defining the primary task at the object level of control, e.g., using a closed-chain definition, and parameterizing the internal forces for each grasp type.
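
    The notion of parameterized internal forces can be illustrated with the standard grasp-matrix formulation (a textbook sketch, not the patent's algorithm): contact forces lying in the null space of the grasp matrix G squeeze the object without producing any net wrench, so they can be set independently of the commanded object motion.

```python
import numpy as np

# Planar object held by two point contacts on opposite sides:
# contact 1 at (-1, 0), contact 2 at (+1, 0). The grasp matrix G maps
# contact forces (fx1, fy1, fx2, fy2) to the net object wrench
# (Fx, Fy, torque), with torque_i = x_i * fy_i - y_i * fx_i.
G = np.array([
    [1.0,  0.0, 1.0, 0.0],    # net Fx
    [0.0,  1.0, 0.0, 1.0],    # net Fy
    [0.0, -1.0, 0.0, 1.0],    # net torque about the centroid
])

# Internal (squeeze) force: equal and opposite along the contact line.
f_internal = np.array([1.0, 0.0, -1.0, 0.0])
wrench = G @ f_internal       # zero net wrench: pure internal force
```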

  3. 3D Laser Scanner for Underwater Manipulation.

    PubMed

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.
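    A common building block of a scanner-to-manipulator calibration like the one mentioned above is estimating the rigid transform that best aligns corresponding 3D points, e.g., via the Kabsch/Umeyama SVD method. The sketch below is a generic illustration under that assumption, not the paper's actual calibration procedure:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with B ~= R @ A + t,
    where A and B are 3xN arrays of corresponding points (Kabsch)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 20))
angle = 0.4
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
B = Rz @ A + t_true
R, t = rigid_transform(A, B)
```

    In practice the correspondences would come from a calibration target observed by the laser scanner while its pose is known from the manipulator's forward kinematics.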

  4. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only represent real-world objects naturally and vividly, but also extend the campus across time and space, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land features, and other objects; dynamic interactive functions are then realized by programming the object models with VRML. The research focuses on virtual campus scene modeling technology and VRML scene design, including optimization strategies for the various real-time processing steps of the scene design workflow, which preserve texture-map image quality while improving the running speed of image texture mapping. Based on the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD, and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  5. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
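    At rendering time, the phantom idea reduces to a per-pixel depth test: a virtual fragment is drawn only where it is closer to the camera than the reconstructed real surface, so real objects correctly occlude virtual ones. A minimal sketch of that compositing rule (array shapes and depth values are hypothetical):

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion test: show a virtual fragment only where it
    is closer to the camera than the real (phantom) surface.
    Depths are in meters; np.inf marks pixels with no virtual fragment."""
    show_virtual = virt_depth < real_depth
    out = real_rgb.copy()
    out[show_virtual] = virt_rgb[show_virtual]
    return out

# Tiny 2x2 example: virtual object in front on the left column,
# behind (or absent) elsewhere.
real_rgb   = np.zeros((2, 2, 3))          # real scene rendered black
real_depth = np.array([[2.0, 1.0],
                       [2.0, 1.0]])
virt_rgb   = np.ones((2, 2, 3))           # virtual object rendered white
virt_depth = np.array([[1.5, 1.5],
                       [np.inf, np.inf]])
out = composite(real_rgb, real_depth, virt_rgb, virt_depth)
```

    The same mask logic extends to shadow casting: shadow fragments are attenuated on whichever surface, real or virtual, wins the depth test.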

  6. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, merged into a single real-time transparent tacit surface. Once this is created, the camera’s position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  7. Using EMG to anticipate head motion for virtual-environment applications

    NASA Technical Reports Server (NTRS)

    Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion

    2005-01-01

    In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
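    The mapping the authors learn with a neural network can be sketched as time-advanced regression: the input is a short window of muscle activity, and the regression target is the inertial signal shifted k samples into the future. The toy below substitutes closed-form ridge regression for the network and synthetic signals for the EMG/inertial data, purely to illustrate the setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, win = 2000, 7, 10          # samples, time advance (samples), input window

# Synthetic stand-ins: the "EMG" leads the "inertial" signal by k samples.
t = np.arange(n) / 100.0
inertial = np.sin(2 * np.pi * 1.5 * t)
emg = np.roll(inertial, -k) + 0.05 * rng.standard_normal(n)

# Training pairs: (window of EMG ending at i) -> (inertial k samples ahead).
X = np.array([emg[i - win:i] for i in range(win, n - k)])
y = inertial[win + k : n]

# Ridge regression in closed form, standing in for the neural network.
lam_reg = 1e-3
W = np.linalg.solve(X.T @ X + lam_reg * np.eye(win), X.T @ y)
pred = X @ W
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

    Because the myoelectric signal genuinely leads the motion, the regressor can output an estimate of the inertial measurement before it occurs, which is exactly the latency budget the abstract describes recovering.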

  8. Using EMG to anticipate head motion for virtual-environment applications.

    PubMed

    Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion

    2005-06-01

    In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.

  9. Development of a robotic evaluation system for the ability of proprioceptive sensation in slow hand motion.

    PubMed

    Tanaka, Yoshiyuki; Mizoe, Genki; Kawaguchi, Tomohiro

    2015-01-01

    This paper proposes a simple diagnostic methodology for checking the ability of proprioceptive/kinesthetic sensation by using a robotic device. The perception ability of virtual frictional forces is examined in operations of the robotic device by the hand at a uniform slow velocity along a virtual straight/circular path. Experimental results from healthy subjects demonstrate that the percentage of correct answers in the designed perceptual tests changes with the motion direction as well as with the arm configuration and the HFM (human force manipulability) measure. These findings suggest that the proposed methodology can be applied to the early detection of neuromuscular/neurological disorders.
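    The HFM measure mentioned here builds on classical manipulability analysis. As a generic illustration (not the authors' exact formulation), the Yoshikawa measure sqrt(det(J J^T)) for a hypothetical two-link planar arm shows how capability varies with arm configuration and vanishes at singularities, which is the kind of configuration dependence the perceptual results are related to:

```python
import numpy as np

def jacobian_2link(t1, t2, l1=0.3, l2=0.3):
    """Geometric Jacobian of a planar two-link arm (end-point velocity
    with respect to the two joint rates). Link lengths in meters."""
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """Yoshikawa manipulability measure sqrt(det(J J^T)); clamped at
    zero to avoid a negative determinant from round-off at singularities."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

# A fully stretched arm (t2 = 0) is singular; a bent elbow is not.
w_singular = manipulability(jacobian_2link(0.3, 0.0))
w_bent = manipulability(jacobian_2link(0.3, np.pi / 2))
```

    For this arm the measure equals |l1 l2 sin(t2)|, so it peaks at a right-angle elbow and collapses as the arm straightens.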

  10. Combining Digital Archives Content with Serious Game Approach to Create a Gamified Learning Experience

    NASA Astrophysics Data System (ADS)

    Shih, D.-T.; Lin, C. L.; Tseng, C.-Y.

    2015-08-01

    This paper presents an interdisciplinary approach to developing a content-aware application that combines gaming with learning on specific categories of digital archives. The use of a content-oriented game enhances the gamification and efficacy of learning in culture education about the architecture and history of Hsinchu County, Taiwan. The gamified form of the application serves as a backbone to support and strongly stimulate users to engage in learning art and culture; this research is therefore carried out under the goal of "The Digital ARt/ARchitecture Project", whose purpose is to develop interactive serious-game approaches and applications for Hsinchu County's historical archives and architecture. We present two applications in the form of augmented reality (AR): "3D AR for Hukou Old Street" and "Hsinchu County History Museum AR Tour". By using AR imaging techniques to blend real objects and virtual content, users can immerse themselves in virtual exhibitions of Hukou Old Street and the Hsinchu County History Museum and learn in a ubiquitous computing environment. The paper proposes a content system comprising the tools and materials used to create representations of digitized cultural archives, including historical artifacts, documents, customs, religion, and architecture. The Digital ARt/ARchitecture Project is based on the concept of serious games and consists of three aspects: content creation, target management, and AR presentation. The project focuses on developing a proper approach to serve as an interactive game and to offer an opportunity for appreciating historic architecture by playing AR cards. Furthermore, the card game aims to provide a multi-faceted learning experience that helps users learn through 3D objects, hyperlinked web data, and the manipulation of learning modes, thereby effectively developing their understanding of the cultural and historical archives of Hsinchu County.

  11. Automation and Robotics for Space-Based Systems, 1991

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  12. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It can portray depth intervals as small as 5 to 10 arc sec, and for this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward, whereas only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereodepth in video displays.

  13. The virtual reality simulator dV-Trainer(®) is a valid assessment tool for robotic surgical skills.

    PubMed

    Perrenot, Cyril; Perez, Manuela; Tran, Nguyen; Jehl, Jean-Philippe; Felblinger, Jacques; Bresler, Laurent; Hubert, Jacques

    2012-09-01

    Exponential development of minimally invasive techniques, such as robotic-assisted devices, raises the question of how to assess robotic surgery skills. Early development of virtual simulators has provided efficient tools for laparoscopic skills certification based on objective scoring, high availability, and lower cost. However, similar evaluation is lacking for robotic training. The purpose of this study was to assess several criteria, such as reliability, face, content, construct, and concurrent validity of a new virtual robotic surgery simulator. This prospective study was conducted from December 2009 to April 2010 using three dV-Trainer(®) simulators (MIMIC Technologies(®)) and one Da Vinci S(®) (Intuitive Surgical(®)). Seventy-five subjects, divided into five groups according to their initial surgical training, were evaluated on five exercises representative of robot-specific skills: 3D perception, clutching, visual force feedback, EndoWrist(®) manipulation, and camera control. Analysis drew on (1) questionnaires (realism and interest), (2) automatically generated data from the simulators, and (3) subjective scoring by two experts of depersonalized videos of similar exercises performed with the robot. Face and content validity were generally considered high (77 %). Five levels of ability were clearly identified by the simulator (ANOVA; p = 0.0024). There was a strong correlation between automatic data from the dV-Trainer and subjective evaluation with the robot (r = 0.822). Reliability of scoring was high (r = 0.851). The most relevant criteria were time and economy of motion. The most relevant exercises were Pick and Place and Ring and Rail. The dV-Trainer(®) simulator proves to be a valid tool to assess basic skills of robotic surgery.

  14. RoboLab and virtual environments

    NASA Technical Reports Server (NTRS)

    Giarratano, Joseph C.

    1994-01-01

    A useful adjunct to the manned space station would be a self-contained free-flying laboratory (RoboLab). This laboratory would have a robot operated under telepresence from the space station or ground. Long duration experiments aboard RoboLab could be performed by astronauts or scientists using telepresence to operate equipment and perform experiments. Operating the lab by telepresence would eliminate the need for life support such as food, water and air. The robot would be capable of motion in three dimensions, have binocular vision TV cameras, and two arms with manipulators to simulate hands. The robot would move along a two-dimensional grid and have a rotating, telescoping periscope section for extension in the third dimension. The remote operator would wear a virtual reality type headset to allow the superposition of computer displays over the real-time video of the lab. The operators would wear exoskeleton type arms to facilitate the movement of objects and equipment operation. The combination of video displays, motion, and the exoskeleton arms would provide a high degree of telepresence, especially for novice users such as scientists doing short-term experiments. The RoboLab could be resupplied and samples removed on other space shuttle flights. A self-contained RoboLab module would be designed to fit within the cargo bay of the space shuttle. Different modules could be designed for specific applications, i.e., crystal-growing, medicine, life sciences, chemistry, etc. This paper describes a RoboLab simulation using virtual reality (VR). VR provides an ideal simulation of telepresence before the actual robot and laboratory modules are constructed. The easy simulation of different telepresence designs will produce a highly optimum design before construction rather than the more expensive and time consuming hardware changes afterwards.

  15. Applied Virtual Reality in Reusable Launch Vehicle Design, Operations Development, and Training

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1997-01-01

    Application of Virtual Reality (VR) technology offers much promise to enhance and accelerate the development of Reusable Launch Vehicle (RLV) infrastructure and operations while simultaneously reducing developmental and operational costs. One of the primary cost areas in the RLV concept that is receiving special attention is maintenance and refurbishment operations. To produce and operate a cost effective RLV, turnaround cost must be minimized. Designing for maintainability is a necessary requirement in developing RLVs. VR can provide cost effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The National Aeronautics and Space Administration (NASA)/Marshall Space Flight Center (MSFC) is beginning to utilize VR for design, operations development, and design analysis for RLVs. A VR applications program has been under development at NASA/MSFC since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. The NASA/MSFC VR capability has also been utilized in several applications. These include: 1) the assessment of the design of the late Space Station Freedom Payload Control Area (PCA), the control room from which onboard payload operations are managed; 2) a viewing analysis of the Tethered Satellite System's (TSS) "end-of-reel" tether marking options; 3) development of a virtual mockup of the International Space Welding Experiment for science viewing analyses from the Shuttle Remote Manipulator System elbow camera and as a trainer for ground controllers; and 4) teleoperations using VR. This presentation will give a general overview of the MSFC VR Applications Program and describe the use of VR in design analyses, operations development, and training for RLVs.

  16. Evaluating the use of augmented reality to support undergraduate student learning in geomorphology

    NASA Astrophysics Data System (ADS)

    Ockelford, A.; Bullard, J. E.; Burton, E.; Hackney, C. R.

    2016-12-01

    Augmented Reality (AR) supports the understanding of complex phenomena by providing unique visual and interactive experiences that combine real and virtual information and help communicate abstract problems to learners. With AR, designers can superimpose virtual graphics over real objects, allowing users to interact with digital content through physical manipulation. One of the most significant pedagogic features of AR is that it provides an essentially student-centred and flexible space in which students can learn. By actively engaging participants using a design-thinking approach, this technology has the potential to provide a more productive and engaging learning environment than real or virtual learning environments alone. AR is increasingly being used in support of undergraduate learning and public engagement activities across engineering, medical and humanities disciplines but it is not widely used across the geosciences disciplines despite the obvious applicability. This paper presents preliminary results from a multi-institutional project which seeks to evaluate the benefits and challenges of using an augmented reality sand box to support undergraduate learning in geomorphology. The sandbox enables users to create and visualise topography. As the sand is sculpted, contours are projected onto the miniature landscape. By hovering a hand over the box, users can make it `rain' over the landscape and the water `flows' down in to rivers and valleys. At undergraduate level, the sand-box is an ideal focus for problem-solving exercises, for example exploring how geomorphology controls hydrological processes, how such processes can be altered and the subsequent impacts of the changes for environmental risk. It is particularly valuable for students who favour a visual or kinesthetic learning style. Results presented in this paper discuss how the sandbox provides a complex interactive environment that encourages communication, collaboration and co-design.
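    The sandbox's two core behaviors, contour projection and simulated rainfall, can be sketched as operations on a heightmap: quantizing elevation into bands, and routing water one steepest-descent step at a time. A toy version with an invented grid (not the project's actual implementation):

```python
import numpy as np

def contour_bands(height, interval=0.1):
    """Quantize a heightmap into integer contour bands, as the sandbox
    projector color-codes elevation."""
    return np.floor(height / interval).astype(int)

def flow_step(height, r, c):
    """One step of steepest-descent 'rain' flow from cell (r, c):
    returns the lowest 4-neighbour, or (r, c) itself at a local minimum."""
    best = (r, c)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < height.shape[0] and 0 <= nc < height.shape[1]:
            if height[nr, nc] < height[best]:
                best = (nr, nc)
    return best

# A tiny landscape sloping down toward a low point at (1, 2).
h = np.array([[0.30, 0.20, 0.10],
              [0.25, 0.15, 0.05],
              [0.35, 0.22, 0.12]])
bands = contour_bands(h)
nxt = flow_step(h, 0, 2)   # rain dropped at the top-right corner
```

    Iterating flow_step until it returns its own cell traces the "river" down into a valley, which is what students see when they hover a hand to make it rain over the sculpted sand.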

  17. The Effect of Perspective on Presence and Space Perception

    PubMed Central

    Ling, Yun; Nefs, Harold T.; Brinkman, Willem-Paul; Qu, Chao; Heynderickx, Ingrid

    2013-01-01

    In this paper we report two experiments in which the effect of perspective projection on presence and space perception was investigated. In Experiment 1, participants were asked to score a presence questionnaire when looking at a virtual classroom. We manipulated the vantage point, the viewing mode (binocular versus monocular viewing), the display device/screen size (projector versus TV) and the center of projection. At the end of each session of Experiment 1, participants were asked to set their preferred center of projection such that the image seemed most natural to them. In Experiment 2, participants were asked to draw a floor plan of the virtual classroom. The results show that field of view, viewing mode, the center of projection and display all significantly affect presence and the perceived layout of the virtual environment. We found a significant linear relationship between presence and perceived layout of the virtual classroom, and between the preferred center of projection and perceived layout. The results indicate that the way in which virtual worlds are presented is critical for the level of experienced presence. The results also suggest that people ignore veridicality and they experience a higher level of presence while viewing elongated virtual environments compared to viewing the original intended shape. PMID:24223156

  18. Virtual Acoustics: Evaluation of Psychoacoustic Parameters

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context that is encountered in day-to-day hearing. Several research efforts have produced results suggesting that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real-time. However, the computational resources for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual-threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data that were obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head movement cues; threshold data for early reflections and late reverberation; and consideration of the necessary accuracy for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, a consideration of the analysis-synthesis of the reverberation within "everyday spaces" (offices, conference rooms) will be contrasted to the commonly used paradigm of concert hall spaces.

  19. Sensorimotor Training in Virtual Reality: A Review

    PubMed Central

    Adamovich, Sergei V.; Fluet, Gerard G.; Tunik, Eugene; Merians, Alma S.

    2010-01-01

    Recent experimental evidence suggests that the rapid advancement of virtual reality (VR) technologies has great potential for the development of novel strategies for sensorimotor training in neurorehabilitation. First, we discuss how adaptive and engaging virtual environments can provide the massive and intensive sensorimotor stimulation needed to induce brain reorganization. Second, discrepancies between veridical and virtual feedback can be introduced in VR to facilitate activation of targeted brain networks, which in turn can potentially speed up the recovery process. Here we review the existing experimental evidence regarding the beneficial effects of training in virtual environments on the recovery of function in the areas of gait, upper extremity function and balance, in various patient populations. We also discuss possible mechanisms underlying these effects. We feel that future research in the area of virtual rehabilitation should follow several important paths. Imaging studies to evaluate the effects of sensory manipulation on brain activation patterns and the effect of various training parameters on long term changes in brain function are needed to guide future clinical inquiry. Larger clinical studies are also needed to establish the efficacy of sensorimotor rehabilitation using VR approaches in various clinical populations and most importantly, to identify VR training parameters that are associated with optimal transfer into real-world functional improvements. PMID:19713617

  20. Virtual learning object and environment: a concept analysis.

    PubMed

    Salvador, Pétala Tuani Candido de Oliveira; Bezerril, Manacés Dos Santos; Mariz, Camila Maria Santos; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira

    2017-01-01

    To analyze the concept of virtual learning object and environment according to Rodgers' evolutionary perspective. Descriptive study with a mixed approach, based on the stages proposed by Rodgers in his concept analysis method. Data collection occurred in August 2015 with the search of dissertations and theses in the Bank of Theses of the Coordination for the Improvement of Higher Education Personnel. Quantitative data were analyzed based on simple descriptive statistics and the concepts through lexicographic analysis with support of the IRAMUTEQ software. The sample was made up of 161 studies. The concept of "virtual learning environment" was presented in 99 (61.5%) studies, whereas the concept of "virtual learning object" was presented in only 15 (9.3%) studies. A virtual learning environment includes several and different types of virtual learning objects in a common pedagogical context.

  1. Effects of Axial Torsion on Disc Height Distribution: An In Vivo Study.

    PubMed

    Espinoza Orías, Alejandro A; Mammoser, Nicole M; Triano, John J; An, Howard S; Andersson, Gunnar B J; Inoue, Nozomu

    2016-05-01

    Axial rotation of the torso is commonly used during manipulation treatment of low back pain. Little is known about the effect of these positions on disc morphology. Rotation is a three-dimensional event that is inadequately represented with planar images in the clinic. True quantification of the intervertebral gap can be achieved with a disc height distribution. The objective of this study was to analyze disc height distribution patterns during torsion relevant to manipulation in vivo. Eighty-one volunteers were computed tomography-scanned both in supine and in right 50° rotation positions. Virtual models of each intervertebral gap representing the disc were created with the inferior endplate of each "disc" set as the reference surface and separated into 5 anatomical zones: 4 peripheral and 1 central, corresponding to the footprint of the annulus fibrosus and nucleus pulposus, respectively. Whole-disc and individual anatomical zone disc height distributions were calculated in both positions and were compared against each other with analysis of variance, with significance set at P < .05. Mean neutral disc height was 7.32 mm (1.59 mm). With 50° rotation, a small but significant increase to 7.44 mm (1.52 mm) (P < .0002) was observed. The right side showed larger separation in most levels, except at L5/S1. The posterior and right zones increased in height upon axial rotation of the spine (P < .0001), whereas the left, anterior, and central zones decreased. This study quantified important tensile/compressive changes in disc height during torsion. The implications of these mutually opposing changes on spinal manipulation are still unknown. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
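    The zone-wise analysis can be sketched as follows: each sampled point of the superior surface carries a height above the inferior endplate reference surface, is assigned to the central zone or one of four peripheral quadrants, and zone means are then compared between positions. The partition radius and sample values below are illustrative, not the study's actual model:

```python
import numpy as np

def zone_of(x, y, r_central=10.0):
    """Assign a point (mm, in the endplate plane) to one of five zones:
    a central disc plus four peripheral quadrants (anterior, posterior,
    left, right). The partition itself is purely illustrative."""
    if np.hypot(x, y) <= r_central:
        return "central"
    if abs(y) >= abs(x):
        return "anterior" if y > 0 else "posterior"
    return "right" if x > 0 else "left"

def zone_heights(points):
    """Mean height per zone for (x, y, h) samples, where h is the distance
    of the superior surface above the inferior endplate reference."""
    sums, counts = {}, {}
    for x, y, h in points:
        z = zone_of(x, y)
        sums[z] = sums.get(z, 0.0) + h
        counts[z] = counts.get(z, 0) + 1
    return {z: sums[z] / counts[z] for z in sums}

# A disc wedged open posteriorly: height grows toward -y.
pts = [(0.0, 0.0, 7.0), (0.0, 20.0, 6.0), (0.0, -20.0, 8.0),
       (20.0, 0.0, 7.0), (-20.0, 0.0, 7.0)]
means = zone_heights(pts)
```

    Running the same computation on supine and rotated scans, then comparing the per-zone means, is the shape of the ANOVA comparison the abstract reports.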

  2. Manipulation of micro-objects using acoustically oscillating bubbles based on the gas permeability of PDMS.

    PubMed

    Liu, Bendong; Tian, Baohua; Yang, Xu; Li, Mohan; Yang, Jiahui; Li, Desheng; Oh, Kwang W

    2018-05-01

    This paper presents a novel manipulation method for micro-objects using acoustically oscillating bubbles with a controllable position, based on the gas permeability of polydimethylsiloxane. The oscillating bubble trapped within the side channel attracts neighboring micro-objects, and the position of the air-liquid interface is controlled by generating a temporary pressure difference between the side channel and the air channel. To demonstrate the feasibility of the method in technological applications, polystyrene microparticles of 10 μm in diameter were successfully captured, transported, and released. The influence of the pressure difference on the movement speed of the air-liquid interface was demonstrated in our experiments, and the manipulation performance was also characterized by varying the frequency of the acoustic excitation and the pressure difference. Since neither the bubble generation nor the air-liquid interface movement in our manipulation method requires any electrochemical reaction or high temperature, this on-chip manipulation method provides a controllable, efficient, and noninvasive tool for handling micro-objects such as particles, cells, and other entities. The whole manipulation process, including capturing, transporting, and releasing of particles, took less than 1 min. The method can be used to select cells and particles in a microfluidic device or to change the cell culture medium.

  3. Altering User Movement Behaviour in Virtual Environments.

    PubMed

    Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy

    2017-04-01

    In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performances to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
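
    The trajectory measure used above (deviation from the shortest route) can be sketched as a polyline-length comparison. The walk and waypoints below are invented, and a real analysis would also resample and smooth the tracked head positions:

```python
import math

def path_length(points):
    """Total length of a polyline given as (x, y) tuples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def deviation_from_shortest(trajectory, waypoints):
    """Extra distance walked compared with the straight route through
    the waypoints, in the same units as the coordinates."""
    return path_length(trajectory) - path_length(waypoints)

# Invented example: the user skirts around a virtual water surface
# instead of walking straight between two waypoints 4 m apart.
waypoints = [(0.0, 0.0), (4.0, 0.0)]
trajectory = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0)]
print(f"deviation: {deviation_from_shortest(trajectory, waypoints):.2f} m")  # 0.47 m
```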

  4. The perception of spatial layout in real and virtual worlds.

    PubMed

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VE) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of these spaces in which they are immersed and how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects after having experienced them previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, in a free binocular real-world condition, or in a static monocular view of the real world. The first two allowed active exploration of the environment while the third allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing conditions were a between-subject variable with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant interaction for gender by viewing condition in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from interaction with real objects, at least within the constraints of the present procedure.

  5. ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work

    NASA Astrophysics Data System (ADS)

    Knopf, George K.; Nelson, Andrew J.

    2004-10-01

    Accurate measurement and thorough documentation of excavated artifacts are the essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and its geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small- to medium-sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms is then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
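
    The registration step (integrating coordinates from multiple views into a single reference frame) can be illustrated in 2D, where the least-squares rigid alignment of corresponding points has a closed form. This is a toy stand-in for the system's 3D multi-view registration, and both point sets below are invented:

```python
import math

def register_2d(src, dst):
    """Least-squares rigid alignment (rotation + translation) mapping the
    corresponding 2D point set `src` onto `dst`."""
    n = len(src)
    csx = sum(x for x, _ in src) / n
    csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n
    cdy = sum(y for _, y in dst) / n
    # Closed-form 2D rotation angle from cross- and dot-product sums of
    # the centred point pairs.
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that carries the rotated source centroid onto dst's.
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))

def transform(points, theta, t):
    """Apply the recovered rotation and translation to a point list."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in points]

# One "view" of three landmarks, seen again rotated 90 degrees and shifted.
view_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
view_b = [(5.0, 1.0), (5.0, 2.0), (3.0, 1.0)]
theta, t = register_2d(view_a, view_b)
print(round(math.degrees(theta), 1))  # 90.0
```

    Real scanner pipelines solve the same problem in 3D (typically with an SVD-based solution plus ICP-style correspondence search), since correspondences between views are not known in advance.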

  6. Visualizing Science Dissections in 3D: Contextualizing Student Responses to Multidimensional Learning Materials in Science Dissections

    NASA Astrophysics Data System (ADS)

    Walker, Robin Annette

    A series of dissection tasks was developed in this mixed-methods study of student self-explanations of their learning using actual and virtual multidimensional science dissections and visuo-spatial instruction. Thirty-five seventh-grade students from a science classroom (N = 20 female/15 male, age = 13 years) were assigned to three dissection environments instructing them to: (a) construct static paper designs of frogs, (b) perform active dissections with formaldehyde specimens, and (c) engage with interactive 3D frog visualizations and virtual simulations. This mixed-methods analysis of student engagement with anchored dissection materials found learning gains on labeling exercises and lab assessments among most students. Data revealed that students who correctly utilized multimedia text and diagrams, individually and collaboratively, manipulated 3D tools more effectively and were better able to self-explain and complete their dissection work. Student questionnaire responses corroborated that they preferred learning how to dissect a frog using 3D multimedia instruction. The data were used to discuss the impact of 3D technologies, programs, and activities on student learning, spatial reasoning, and their interest in science. Implications were drawn regarding how to best integrate 3D visualizations into science curricula as innovative learning options for students, as instructional alternatives for teachers, and as mandated dissection choices for those who object to physical dissections in schools.

  7. Simulating 3D deformation using connected polygons

    NASA Astrophysics Data System (ADS)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

    In modern 3D applications, interaction between the user and the virtual world is one of the important factors in increasing realism. This interaction can be visualized in many forms; one of them is object deformation. There are many ways to simulate object deformation in a virtual 3D world; each comes with a different level of realism and performance. Our objective is to present a new method to simulate object deformation using graph-connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.

  8. ERP effects and perceived exclusion in the Cyberball paradigm: Correlates of expectancy violation?

    PubMed

    Weschke, Sarah; Niedeggen, Michael

    2015-10-22

    A virtual ball-tossing game called Cyberball has allowed the identification of neural structures involved in the processing of social exclusion by using neurocognitive methods. However, there is still an ongoing debate about whether the structures involved are pain- or exclusion-specific or part of a broader network. In electrophysiological Cyberball studies we have shown that the P3b component is sensitive to exclusion manipulations, possibly modulated by the probability of ball possession of the participant (event "self") or the presumed co-players (event "other"). Since it is known from oddball studies that the P3b is modulated not only by the objective probability of an event but also by subjective expectancy, we independently manipulated the probability of the events "self" and "other" and the expectancy for these events. Questionnaire data indicate that social need threat is only induced when the expectancy for involvement in the ball-tossing game is violated. Similarly, the P3b amplitude of both "self" and "other" events was a correlate of expectancy violation. We conclude that both the subjective report of exclusion and the P3b effect induced in the Cyberball paradigm are primarily based on a cognitive process sensitive to expectancy violations, and that the P3b is not related to the activation of an exclusion-specific neural alarm system. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. AMI: Augmented Michelson Interferometer

    NASA Astrophysics Data System (ADS)

    Furió, David; Hachet, Martin; Guillet, Jean-Paul; Bousquet, Bruno; Fleck, Stéphanie; Reuter, Patrick; Canioni, Lionel

    2015-10-01

    Experiments in optics are essential for learning and understanding physical phenomena. The problem with these experiments is that they are generally time consuming to construct and maintain, potentially dangerous through the use of laser sources, and often expensive due to high-technology optical components. We propose to simulate such experiments by way of hybrid systems that exploit both spatial augmented reality and tangible interaction. In particular, we focus on one of the most popular optical experiments: the Michelson interferometer. In our approach, we target a highly interactive system where students are able to interact in real time with the Augmented Michelson Interferometer (AMI) to observe, test hypotheses and then to enhance their comprehension. Compared to a fully digital simulation, we are investigating an approach that benefits from both physical and virtual elements, and where the students experiment by manipulating 3D-printed physical replicas of optical components (e.g. lenses and mirrors). Our objective is twofold. First, we want to ensure that students learn the same concepts and skills with our simulator as they do with traditional methods. Second, we hypothesize that such a system opens new opportunities to teach optics in a way that was not possible before, by manipulating concepts beyond the limits of observable physical phenomena. To reach this goal, we have built a complementary team composed of experts in the fields of optics, human-computer interaction, computer graphics, sensors and actuators, and education science.

  10. Social Interactions and Instructional Artifacts: Emergent Socio-Technical Affordances and Constraints for Children's Geometric Thinking

    ERIC Educational Resources Information Center

    Evans, Michael A.; Wilkins, Jesse L. M.

    2011-01-01

    The reported exploratory study consisted primarily of classroom visits, videotaped sessions, and post-treatment interviews whereby second graders (n = 12) worked on problems in planar geometry, individually and in triads, using physical and virtual manipulatives. The goal of the study was to: 1) characterize the nature of geometric thinking found…

  11. Schema-Based Instruction with Concrete and Virtual Manipulatives to Teach Problem Solving to Students with Autism

    ERIC Educational Resources Information Center

    Root, Jenny R.; Browder, Diane M.; Saunders, Alicia F.; Lo, Ya-yu

    2017-01-01

    The current study evaluated the effects of modified schema-based instruction on the mathematical word problem solving skills of three elementary students with autism spectrum disorders and moderate intellectual disability. Participants learned to solve compare problem type with themes that related to their interests and daily experiences. In…

  12. Annual Fire, Mowing and Fertilization Effects on Two Cicada Species (Homoptera: Cicadidae) in Tallgrass Prairie

    Treesearch

    Mac A. Callaham; Matt R. Whiles; John M. Blair

    2002-01-01

    In tallgrass prairie, cicadas emerge annually, are abundant and their emergence can be an important flux of energy and nutrients. However, factors influencing the distribution and abundance of these cicadas are virtually unknown. We examined cicada emergence in plots from a long-term (13 y) experimental manipulation involving common tallgrass prairie management...

  13. Exploration of Factors that Affect the Comparative Effectiveness of Physical and Virtual Manipulatives in an Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Chini, Jacquelyn J.; Madsen, Adrian; Gire, Elizabeth; Rebello, N. Sanjay; Puntambekar, Sadhana

    2012-01-01

    Recent research results have failed to support the conventionally held belief that students learn physics best from hands-on experiences with physical equipment. Rather, studies have found that students who perform similar experiments with computer simulations perform as well or better on measures of conceptual understanding than their peers who…

  14. Elementary School Teachers' Perceptions toward ICT: The Case of Using Magic Board for Teaching Mathematics

    ERIC Educational Resources Information Center

    Yuan, Yuan; Lee, Chun-Yi

    2012-01-01

    This study aims at investigating elementary school teachers' perceptions toward the use of ICT. Magic Board, an interactive web-based environment which provides a set of virtual manipulatives for elementary mathematics, is used as the case of ICT. After participating in Magic Board workshops, 250 elementary school teachers in Taiwan responded…

  15. STS-111 Expedition Five Crew Training Clip

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The STS-111 Expedition Five Crew begins with training on payload operations. Flight Engineer Peggy Whitson and Mission Specialist Sandy Magnus are shown in Shuttle Remote Manipulator System (SRMS) procedures. Flight Engineer Sergei Treschev gets suited for Neutral Buoyancy Lab (NBL) training. Virtual Reality lab training is shown with Peggy Whitson. Habitation equipment and procedures are also presented.

  16. Designing Experiments on Thermal Interactions by Secondary-School Students in a Simulated Laboratory Environment

    ERIC Educational Resources Information Center

    Lefkos, Ioannis; Psillos, Dimitris; Hatzikraniotis, Euripides

    2011-01-01

    Background and purpose: The aim of this study was to explore the effect of investigative activities with manipulations in a virtual laboratory on students' ability to design experiments. Sample: Fourteen students in a lower secondary school in Greece attended a teaching sequence on thermal phenomena based on the use of information and…

  17. Directional control-response compatibility relationships assessed by physical simulation of an underground bolting machine.

    PubMed

    Steiner, Lisa; Burgess-Limerick, Robin; Porter, William

    2014-03-01

    The authors examine the pattern of direction errors made during the manipulation of a physical simulation of an underground coal mine bolting machine to assess the directional control-response compatibility relationships associated with the device and to compare these results to data obtained from a virtual simulation of a generic device. Directional errors during the manual control of underground coal roof bolting equipment are associated with serious injuries. Directional control-response relationships have previously been examined using a virtual simulation of a generic device; however, the applicability of these results to a specific physical device may be questioned. Forty-eight participants randomly assigned to different directional control-response relationships manipulated horizontal or vertical control levers to move a simulated bolter arm in three directions (elevation, slew, and sump) as well as to cause a light to become illuminated and raise or lower a stabilizing jack. Directional errors were recorded during the completion of 240 trials by each participant. Directional error rates increased when the control and response were in opposite directions or when the directions of the control and response were perpendicular. The pattern of direction error rates was consistent with results obtained from a generic device in a virtual environment. Error rates are increased by incompatible directional control-response relationships. Ensuring that the design of equipment controls maintains compatible directional control-response relationships has potential to reduce the errors made in high-risk situations, such as underground coal mining.

  18. Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality

    PubMed Central

    Pritchard, Stephen C.; Zopf, Regine; Polito, Vince; Kaplan, David M.; Williams, Mark A.

    2016-01-01

    The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings. PMID:27826275

  19. KinImmerse: Macromolecular VR for NMR ensembles

    PubMed Central

    Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C

    2009-01-01

    Background: In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods: The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results: In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion: The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844

  20. Image processing, geometric modeling and data management for development of a virtual bone surgery system.

    PubMed

    Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge

    2008-01-01

    This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large data set in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill with a PHANToM haptic device to make holes in a bone model derived from real CT scan data.
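
    The region-growing segmentation mentioned above can be illustrated on a toy 2D "slice": starting from a seed voxel, collect connected neighbours whose intensity falls inside a window. The grid values, the intensity window, and the 4-connectivity choice below are all assumptions for illustration, not the paper's implementation:

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """Flood-fill style region growing on a 2D intensity grid: starting
    from `seed`, collect 4-connected cells whose intensity lies within
    [lo, hi] -- the same idea used to separate cortical from trabecular
    bone in CT data, shown here on a toy slice."""
    rows, cols = len(image), len(image[0])
    region, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        region.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

# Toy "CT slice": a high-intensity cortical shell surrounds lower-
# intensity trabecular interior values.
slice_ = [
    [900, 900, 900, 900],
    [900, 300, 320, 900],
    [900, 310, 290, 900],
    [900, 900, 900, 900],
]
trabecular = region_grow(slice_, seed=(1, 1), lo=250, hi=400)
print(len(trabecular))  # 4 inner voxels
```

    On a real CT volume the same flood-fill generalizes to 3D with 6-connected voxels, and the grown regions feed the volume model used for the Boolean material-removal step.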
