Sample records for virtual device interface

  1. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study.

    PubMed

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-10-25

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object with six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the "Florida Secundaria" high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed more meaningful learning (the concepts learnt were more durable).

  2. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study

    PubMed Central

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-01-01

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object with six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the “Florida Secundaria” high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed more meaningful learning (the concepts learnt were more durable). PMID:27792132

  3. Virtual optical interfaces for the transportation industry

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Kress, Bernard

    2010-04-01

    We present a novel implementation of virtual optical interfaces for the transportation industry (automotive and avionics). This new implementation combines two functionalities in a single device: projection of a virtual interface and sensing of the position of the fingers on top of that interface. Both functionalities are produced by diffraction of laser light. The device we are developing includes both functionalities in a compact package with no optical elements to align, since all of them are pre-aligned on a single glass wafer through optical lithography. The package contains a CMOS sensor whose diffractive objective lens is optimized both for the projected interface color and for the IR finger-position sensor based on structured illumination. Two versions are proposed: one that senses the 2D position of the hand and one that senses the hand position in 3D.

  4. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases.

    PubMed

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-04-15

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripherals (Nunchuk, Balance Board), head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.

  5. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases

    PubMed Central

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-01-01

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripherals (Nunchuk, Balance Board), head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients’ brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies. PMID:25206907

  6. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and with a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
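
    A minimal sketch of the object model this abstract describes, i.e. objects that encapsulate properties, methods, and events, plus a container that groups them; all class and event names below are invented for illustration and are not taken from the VL software:

    ```python
    # Sketch of an event-driven object model like the one the record describes:
    # each object bundles properties and event handlers, and a container groups
    # objects and propagates events to them. Names are illustrative only.

    class VLObject:
        def __init__(self, **properties):
            self.properties = dict(properties)
            self._handlers = {}          # event name -> list of callbacks

        def on(self, event, callback):
            self._handlers.setdefault(event, []).append(callback)

        def fire(self, event, **data):
            for callback in self._handlers.get(event, []):
                callback(self, **data)

    class Container(VLObject):
        def __init__(self, *children, **properties):
            super().__init__(**properties)
            self.children = list(children)

        def fire(self, event, **data):
            super().fire(event, **data)
            for child in self.children:  # propagate the event to grouped objects
                child.fire(event, **data)

    # usage: a glove interface object reacting to a "grab" event
    glove = VLObject(hand="right")
    glove.on("grab", lambda obj, target: print(f"{obj.properties['hand']} hand grabs {target}"))
    lab = Container(glove)
    lab.fire("grab", target="specimen")
    ```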

  7. Virtually-augmented interfaces for tactical aircraft.

    PubMed

    Haas, M W

    1995-05-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.

  8. Comparing two types of navigational interfaces for Virtual Reality.

    PubMed

    Teixeira, Luís; Vilar, Elisângela; Duarte, Emília; Rebelo, Francisco; da Silva, Fernando Moreira

    2012-01-01

    Previous studies suggest significant differences between navigating virtual environments in a life-like walking manner (i.e., using treadmills or walk-in-place techniques) and virtual navigation (i.e., flying while really standing). The latter option, which usually involves hand-centric devices (e.g., joysticks), is the most common in Virtual Reality-based studies, mostly due to lower costs and lesser space and technology demands. However, new interaction devices originally conceived for videogames have recently become available, offering interesting potential for research. This study aimed to explore the potential of the Nintendo Wii Balance Board as a navigation interface in a Virtual Environment presented in an immersive Virtual Reality system. Comparing participants' performance while engaged in a simulated emergency egress allows determining the adequacy of such an alternative navigation interface on the basis of empirical results. Forty university students participated in this study. Results show that participants were more efficient when performing navigation tasks with the Joystick than with the Balance Board. However, there were no significant differences in behavioral compliance with exit signs. Therefore, this study suggests that, at least for tasks similar to the one studied, the Balance Board has good potential as a navigation interface for Virtual Reality systems.

  9. Device Control Using Gestures Sensed from EMG

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.

    2003-01-01

    In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram (EMG) data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
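
    A rough sketch of the recognition scheme this record describes (one hidden Markov model per gesture; the gesture whose model scores the sequence highest wins), using the hmmlearn library as a stand-in for the paper's unspecified pattern-recognition software; feature shapes and all names are assumptions:

    ```python
    # One HMM per gesture, trained on EMG feature sequences; classification
    # picks the gesture with the highest log-likelihood. hmmlearn is a
    # stand-in, not the software used in the paper.
    import numpy as np
    from hmmlearn import hmm

    def train_gesture_models(training_data, n_states=4):
        """training_data: {gesture_name: list of (T, n_channels) feature arrays}"""
        models = {}
        for name, sequences in training_data.items():
            X = np.vstack(sequences)              # stack all sequences
            lengths = [len(s) for s in sequences]  # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[name] = m
        return models

    def classify(models, sequence):
        """Return the gesture whose HMM scores the EMG feature sequence highest."""
        return max(models, key=lambda name: models[name].score(sequence))
    ```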

  10. Virtual Reality: An Overview.

    ERIC Educational Resources Information Center

    Franchi, Jorge

    1994-01-01

    Highlights of this overview of virtual reality include optics; interface devices; virtual worlds; potential applications, including medicine and archaeology; problems, including costs; current research and development; future possibilities; and a listing of vendors and suppliers of virtual reality products. (Contains 11 references.) (LRW)

  11. Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    2002-01-01

    The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.

  12. Multi-modal cockpit interface for improved airport surface operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.
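
    A toy illustration of the rendering step the patent abstract claims, combining the aircraft's current heading with the tracked head pose to orient the virtual airport scene; the matrices and conventions below are illustrative, not taken from the patent:

    ```python
    # Compose the aircraft's heading with the pilot's head yaw to obtain the
    # world-space gaze direction used to render the virtual airport scene.
    # Conventions (z-up, x-forward) are assumptions for illustration.
    import numpy as np

    def yaw_matrix(yaw_rad):
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def view_direction(aircraft_heading_rad, head_yaw_rad):
        # world-space gaze = aircraft heading composed with head yaw
        R = yaw_matrix(aircraft_heading_rad) @ yaw_matrix(head_yaw_rad)
        return R @ np.array([1.0, 0.0, 0.0])  # rotate the forward unit vector

    # usage: aircraft heading 090, pilot looking 30 degrees to the left
    print(view_direction(np.radians(90), np.radians(-30)))
    ```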

  13. Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory

    ERIC Educational Resources Information Center

    Jagodzinski, Piotr; Wolski, Robert

    2015-01-01

    Natural User Interfaces (NUI) are now widely used in electronic devices such as smartphones, tablets and gaming consoles. We have tried to apply this technology in the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar…

  14. Virtual workstations and telepresence interfaces: Design accommodations and prototypes for Space Station Freedom evolution

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1990-01-01

    An advanced human-system interface is being developed for evolutionary Space Station Freedom as part of the NASA Office of Space Station (OSS) Advanced Development Program. The human-system interface is based on body-pointed display and control devices. The project will identify and document the design accommodations ('hooks and scars') required to support virtual workstations and telepresence interfaces, and prototype interface systems will be built, evaluated, and refined. The project is a joint enterprise of Marquette University, Astronautics Corporation of America (ACA), and NASA's ARC. The project team is working with NASA's JSC and McDonnell Douglas Astronautics Company (the Work Package contractor) to ensure that the project is consistent with space station user requirements and program constraints. Documentation describing design accommodations and tradeoffs will be provided to OSS, JSC, and McDonnell Douglas, and prototype interface devices will be delivered to ARC and JSC. ACA intends to commercialize derivatives of the interface for use with computer systems developed for scientific visualization and system simulation.

  15. A Web Service and Interface for Remote Electronic Device Characterization

    ERIC Educational Resources Information Center

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  16. VIRTUAL FRAME BUFFER INTERFACE

    NASA Technical Reports Server (NTRS)

    Wolfe, T. L.

    1994-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
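
    The core dispatch idea, generic calls translated into device-specific commands so application code runs unmodified on any supported frame buffer, can be sketched compactly. The original is FORTRAN 77 shared libraries, so the Python below is only a structural analogy; the method and command strings are invented:

    ```python
    # Generic frame-buffer interface plus one device-specific driver. The
    # application programs against GenericFrameBuffer only; each driver
    # translates generic calls into its own command set.

    class GenericFrameBuffer:
        """What application code programs against."""
        def write_pixel(self, x, y, value): raise NotImplementedError
        def draw_text(self, x, y, text):    raise NotImplementedError

    class Ramtek9460(GenericFrameBuffer):
        def write_pixel(self, x, y, value):
            self._send(f"WP {x} {y} {value}")   # device-specific command
        def draw_text(self, x, y, text):
            self._send(f"TX {x} {y} {text}")
        def _send(self, cmd):
            print(f"RAMTEK 9460 <- {cmd}")      # stand-in for real device I/O

    def plot_marker(fb: GenericFrameBuffer, x, y):
        # application code: runs unmodified on any supported frame buffer
        fb.write_pixel(x, y, 255)
        fb.draw_text(x + 2, y, "peak")

    plot_marker(Ramtek9460(), 100, 200)
    ```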

  17. Virtual Frame Buffer Interface Program

    NASA Technical Reports Server (NTRS)

    Wolfe, Thomas L.

    1990-01-01

    Virtual Frame Buffer Interface program makes all frame buffers appear as generic frame buffer with specified set of characteristics, allowing programmers to write codes that run unmodified on all supported hardware. Converts generic commands to actual device commands. Consists of definition of capabilities and FORTRAN subroutines called by application programs. Developed in FORTRAN 77 for DEC VAX 11/780 or DEC VAX 11/750 computer under VMS 4.X.

  18. Haptic interfaces: Hardware, software and human performance

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandayam A.

    1995-01-01

    Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
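
    As a concrete example of the single-point-of-contact rendering such devices perform, here is a common penalty-based force law; this is a sketch with illustrative gains, not necessarily the Phantom's actual control law:

    ```python
    # Penalty-based haptic rendering for one point of contact: when the probe
    # penetrates a virtual surface, push back along the surface normal with a
    # spring-damper force. Gains are illustrative.

    def contact_force(penetration_depth, penetration_velocity,
                      stiffness=800.0, damping=2.0):
        """Force along the surface normal (N); zero when not in contact."""
        if penetration_depth <= 0.0:
            return 0.0
        return stiffness * penetration_depth + damping * penetration_velocity

    # usage: 2 mm penetration, moving 1 cm/s deeper -> ~1.62 N restoring force
    print(contact_force(0.002, 0.01))
    ```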

  19. Fast and Efficient Radiological Interventions via a Graphical User Interface Commanded Magnetic Resonance Compatible Robotic Device

    PubMed Central

    Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos

    2011-01-01

    The graphical user interface for an MR compatible robotic device has the capability of displaying oblique MR slices in 2D and a 3D virtual environment along with the representation of the robotic arm in order to swiftly complete the intervention. Using the advantages of the MR modality the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067

  20. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high grade of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities and techniques, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  1. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories on how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, there are several factors that prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many of the 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems ideal for interaction using a 3D dome display, other non-spherical grids can be used as well.
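
    One building block such a mouse-and-voice interface needs is a mapping from the 2D mouse position on the dome image to the spherical coordinate grid; the fisheye-style mapping below is an assumption for illustration, not the paper's method:

    ```python
    # Map a 2D mouse position on a circular dome image to azimuth/elevation
    # on a spherical grid. The radial (fisheye-style) mapping is assumed.
    import math

    def mouse_to_sphere(mx, my, width, height):
        # normalize to [-1, 1] with the dome center at the image center
        x = 2.0 * mx / width - 1.0
        y = 2.0 * my / height - 1.0
        r = math.hypot(x, y)
        if r > 1.0:
            return None                       # click landed outside the dome
        azimuth = math.atan2(y, x)
        elevation = (1.0 - r) * math.pi / 2   # image center = dome zenith
        return azimuth, elevation

    # usage: a click at the exact center of an 800x600 dome image -> zenith
    print(mouse_to_sphere(400, 300, 800, 600))
    ```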

  2. Management software for a universal device communication controller: application to monitoring and computerized infusions.

    PubMed

    Coussaert, E J; Cantraine, F R

    1996-11-01

    We designed a virtual device for a local area network observing, operating and connecting devices to a personal computer. To keep the widest field of application, we proceeded by using abstraction and specification rules of software engineering in the design and implementation of the hardware and software for the Infusion Monitor. We specially built a box of hardware to interface multiple medical instruments with different communication protocols to a PC via a single serial port. We called that box the Universal Device Communication Controller (UDCC). The use of the virtual device driver is illustrated by the Infusion Monitor implemented for the anaesthesia and intensive care workstation.
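
    A schematic sketch of the UDCC idea, one physical link carrying traffic for several instruments, with a per-device virtual device translating its protocol; the framing (one address byte plus payload) and the decoders below are invented for illustration:

    ```python
    # One controller multiplexes several instruments with different protocols
    # over a single link; each virtual device decodes its own traffic.
    # Addresses, framing, and decoders are illustrative assumptions.

    class VirtualDevice:
        def __init__(self, address, decode):
            self.address, self.decode = address, decode

    class UDCC:
        def __init__(self):
            self.devices = {}

        def attach(self, device):
            self.devices[device.address] = device

        def dispatch(self, frame: bytes):
            # assumed framing: first byte addresses a device, rest is payload
            device = self.devices[frame[0]]
            return device.decode(frame[1:])

    udcc = UDCC()
    udcc.attach(VirtualDevice(0x01, lambda p: ("pump_rate_ml_h", float(p.decode()))))
    udcc.attach(VirtualDevice(0x02, lambda p: ("spo2_percent", float(p.decode()))))
    print(udcc.dispatch(bytes([0x01]) + b"12.5"))  # -> ('pump_rate_ml_h', 12.5)
    ```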

  3. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  4. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.
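
    As a hedged illustration of the kind of gesture-to-command mapping such NUIs implement (not Kinoogle's actual logic), relative hand positions from a depth sensor's skeleton stream can be turned into virtual-globe navigation commands; the thresholds and command names are assumptions:

    ```python
    # Map relative hand positions (meters, relative to the user's torso) to
    # virtual-globe navigation commands. Thresholds and commands are assumed.

    def globe_command(left_hand, right_hand, threshold=0.15):
        """Hands are (x, y, z) tuples; returns a navigation command string."""
        dx = right_hand[0] - left_hand[0]
        if dx > 0.6:
            return "zoom_out"        # hands spread far apart
        if dx < 0.2:
            return "zoom_in"         # hands brought together
        dy = (left_hand[1] + right_hand[1]) / 2
        if dy > threshold:
            return "tilt_up"         # both hands raised
        if dy < -threshold:
            return "tilt_down"       # both hands lowered
        return "idle"

    print(globe_command((-0.4, 0.0, 0.5), (0.4, 0.0, 0.5)))  # -> zoom_out
    ```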

  5. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, and localized auditory presentations, drive haptic displays on the stick and rudder pedals, and execute weapons models, aerodynamic models, and threat models.

  6. Virtual environment architecture for rapid application development

    NASA Technical Reports Server (NTRS)

    Grinstein, Georges G.; Southard, David A.; Lee, J. P.

    1993-01-01

    We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.

  7. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  8. Human-scale interaction for virtual model displays: a clear case for real tools

    NASA Astrophysics Data System (ADS)

    Williams, George C.; McDowall, Ian E.; Bolas, Mark T.

    1998-04-01

    We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.

  9. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    PubMed

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things is comprised of heterogeneous devices, applications, and platforms using multiple communication technologies to connect to the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy is comprised of a server and a client that forward messages between different communication environments using the virtual resources, which include a server for the message sender and a client for the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources which expose services on the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access them using the consistent communication protocol in the Open Connectivity Foundation network. For discovering the resources to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.
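
    A minimal sketch of the virtual-resource idea, a resource discovered in the local network that, when retrieved, forwards the request to an external web service and relays the reply; the URIs, class names, and registry below are invented for illustration:

    ```python
    # A "virtual resource" looks like a local (OCF-style) resource to clients,
    # but retrieving it forwards the request to a bridged web service.
    # All URIs and names here are illustrative assumptions.
    import urllib.request

    class VirtualResource:
        def __init__(self, uri, upstream_url):
            self.uri = uri                    # what local clients discover
            self.upstream_url = upstream_url  # the bridged external web service

        def retrieve(self):
            # the client used the consistent local interface; the proxy
            # performs the actual HTTP request on its behalf
            with urllib.request.urlopen(self.upstream_url, timeout=5) as resp:
                return resp.read()

    # discovery registry: URI -> virtual resource
    registry = {r.uri: r for r in [
        VirtualResource("/weather", "http://example.com/api/weather"),
    ]}
    ```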

  10. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    PubMed Central

    Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.

    2017-01-01

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407

  11. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is emerging as an essential part of the entertainment, medical, sports, education and industrial fields with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices: mice, joysticks, midi-sliders, and so on. Those devices could not enable a virtual human character to move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optic sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game which requires very fast movement of the human character.

  12. Design of virtual SCADA simulation system for pressurized water reactor

    NASA Astrophysics Data System (ADS)

    Wijaksono, Umar; Abdullah, Ade Gafar; Hakim, Dadang Lukman

    2016-02-01

    The Virtual SCADA system is a software-based Human-Machine Interface that can visualize the process of a plant. This paper describes the results of a virtual SCADA system design that aims to illustrate the working principles of a Pressurized Water Reactor nuclear power plant. The simulation uses technical data of Nuclear Power Plant Unit Olkiluoto 3 in Finland. The system was developed using Wonderware InTouch and is equipped with manuals for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, the system can clearly demonstrate the principles of energy flow and energy conversion processes in Pressurized Water Reactors. This virtual SCADA simulation system can be used as an instructional medium for teaching the principles of the Pressurized Water Reactor.

  13. Recommended Practices for Interactive Video Portability

    DTIC Science & Technology

    1990-10-01

    …passed via an ASCII or binary application interface to the Virtual Device Interface (VDI) Management Software. VDI Management, in turn, executes the commands by calling appropriate low-level services and passes responses back to the application via the application interface. VDI Management is…

  14. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
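
    A sketch of the fusion step described above, position from the motion-capture markers and orientation from the onboard sensor package combined into a single pose for the virtual surrogate; the math and conventions are illustrative only:

    ```python
    # Build a 4x4 pose for the virtual tablet: rotation from the handheld
    # sensor package (IMU), translation from the centroid of the reflective
    # markers tracked by the motion-capture system. Conventions are assumed.
    import numpy as np

    def pose_matrix(marker_positions, orientation_R):
        """marker_positions: (N, 3) array; orientation_R: 3x3 rotation matrix."""
        T = np.eye(4)
        T[:3, :3] = orientation_R                       # orientation from IMU
        T[:3, 3] = np.mean(marker_positions, axis=0)    # position from markers
        return T

    # usage: three markers on the acrylic sheet, identity orientation
    markers = np.array([[0.10, 1.20, 0.50], [0.20, 1.20, 0.50], [0.15, 1.30, 0.50]])
    print(pose_matrix(markers, np.eye(3)))
    ```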

  15. Virtual reality applications to automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Hale, Joseph; Oneil, Daniel

    1991-01-01

    Virtual Reality (VR) is a rapidly developing Human/Computer Interface (HCI) technology. The evolution of high-speed graphics processors and the development of specialized anthropomorphic user interface devices, which more fully involve the human senses, have enabled VR technology. Recently, the maturity of this technology has reached a level where it can be used as a tool in a variety of applications. This paper provides an overview of VR technology, VR activities at Marshall Space Flight Center (MSFC), and applications of VR to Automated Rendezvous and Capture (AR&C), and identifies areas of VR technology that require further development.

  16. Partitioning of Function in a Distributed Graphics System.

    DTIC Science & Technology

    1985-03-01

    …Interface specification (VDI) is yet another graphics standardization effort of ANSI committee X3H33 [7]. As shown in figure 2-2, the Virtual Device…VDI specification could be realized in a real device, or at least a "black box" which the user treats as a hardware device. The device drivers would…be written by the manufacturer of the graphics device, instead of the author of the graphics system. Since the VDI specification is precisely defined…

  17. Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory

    NASA Astrophysics Data System (ADS)

    Jagodziński, Piotr; Wolski, Robert

    2015-02-01

    Natural User Interfaces (NUI) are now widely used in electronic devices such as smartphones, tablets and gaming consoles. We have tried to apply this technology in the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar to those that they perform in a real laboratory. A Kinect sensor was used for the detection and analysis of the student's hand movements, which is an example of NUI. The studies conducted confirmed the effectiveness of the educational virtual laboratory. The extent to which the use of this teaching aid increased the students' progress in learning chemistry was examined. The results indicate that the use of NUI creates opportunities to both enhance and improve the quality of chemistry education. Working in a virtual laboratory using the Kinect interface results in greater emotional involvement and an increased sense of self-efficacy in laboratory work among students. As a consequence, students are getting higher marks and are more interested in the subject of chemistry.

  18. An intelligent control and virtual display system for evolutionary space station workstation design

    NASA Technical Reports Server (NTRS)

    Feng, Xin; Niederjohn, Russell J.; Mcgreevy, Michael W.

    1992-01-01

    Research and development of the Advanced Display and Computer Augmented Control System (ADCACS) for the space station Body-Ported Cupola Virtual Workstation (BP/VCWS) were pursued. The potential applications of body-ported virtual display and intelligent control technology for human-system interfacing in the space station environment were explored. The new system is designed to enable crew members to control and monitor a variety of space operations with greater flexibility and efficiency than existing fixed consoles. The technologies being studied include helmet-mounted virtual displays, voice and special command input devices, and microprocessor-based intelligent controllers. Several research topics, such as human factors, decision support expert systems, and wide field-of-view color displays, are being addressed. The study showed the significant advantages of this uniquely integrated display and control system, and its feasibility for human-system interfacing applications in the space station command and control environment.

  19. Design of virtual SCADA simulation system for pressurized water reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wijaksono, Umar, E-mail: umar.wijaksono@student.upi.edu; Abdullah, Ade Gafar; Hakim, Dadang Lukman

    The Virtual SCADA system is a software-based Human-Machine Interface that can visualize the process of a plant. This paper describes the results of a virtual SCADA system design that aims to illustrate the working principles of a Pressurized Water Reactor nuclear power plant. The simulation uses technical data of Nuclear Power Plant Unit Olkiluoto 3 in Finland. The system was developed using Wonderware InTouch and is equipped with manuals for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, the system can clearly demonstrate the principles of energy flow and energy conversion processes in Pressurized Water Reactors. This virtual SCADA simulation system can be used as an instructional medium for teaching the principles of the Pressurized Water Reactor.

  20. Virtual Environment User Interfaces to Support RLV and Space Station Simulations in the ANVIL Virtual Reality Lab

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    1998-01-01

    Several virtual reality I/O peripherals were successfully configured and integrated as part of the author's 1997 Summer Faculty Fellowship work. These devices, which were not supported by the developers of VR software packages, use new software drivers and configuration files developed by the author to allow them to be used with simulations developed using those software packages. The successful integration of these devices has added significant capability to the ANVIL lab at MSFC. In addition, the author was able to complete the integration of a networked virtual reality simulation of the Space Shuttle Remote Manipulator System docking Space Station modules which was begun as part of his 1996 Fellowship. The successful integration of this simulation demonstrates the feasibility of using VR technology for ground-based training as well as on-orbit operations.

  1. The Impact of User-Input Devices on Virtual Desktop Trainers

    DTIC Science & Technology

    2010-09-01

    …playing the game more enjoyable. Some of these changes include the design of controllers, the controller interface, and ergonomic changes made to…within-subjects experimental design to evaluate young active duty Soldiers' ability to move and shoot in a virtual environment using different input…sufficient gaming proficiency, resulting in more time dedicated to training military skills. We employed a within-subjects experimental design to…

  2. BacNet and Analog/Digital Interfaces of the Building Controls Virtual Testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nouidui, Thierry Stephane; Wetter, Michael; Li, Zhengwei

    2011-11-01

    This paper gives an overview of recent developments in the Building Controls Virtual Test Bed (BCVTB), a framework for co-simulation and hardware-in-the-loop. First, a general overview of the BCVTB is presented. Second, we describe the BACnet interface, a link which has been implemented to couple BACnet devices to the BCVTB. We present a case study where the interface was used to couple a whole building simulation program to a building control system to assess in real-time the performance of a real building. Third, we present the ADInterfaceMCC, an analog/digital interface that allows a USB-based analog/digital converter to be linked to the BCVTB. In a case study, we show how the link was used to couple the analog/digital converter to a building simulation model for local loop control.
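
    A toy version of the synchronized data exchange the BCVTB coordinates, with plain Python callables standing in for the coupled simulation program and control system (no BACnet or analog/digital I/O; all models below are invented):

    ```python
    # Fixed-step co-simulation loop: at each synchronization step the
    # simulator produces a measurement and the controller returns a control
    # signal fed back into the next step. The models are toy stand-ins.

    def co_simulate(simulator_step, controller_step, t_end, dt):
        t, control = 0.0, 0.0
        while t < t_end:
            measurement = simulator_step(t, control)   # e.g., room temperature
            control = controller_step(t, measurement)  # e.g., heating signal
            t += dt
        return control

    room = lambda t, u: 18.0 + 2.0 * u                 # toy thermal model
    thermostat = lambda t, temp: 1.0 if temp < 20.0 else 0.0
    co_simulate(room, thermostat, t_end=3600.0, dt=60.0)
    ```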

  3. Tools virtualization for command and control systems

    NASA Astrophysics Data System (ADS)

    Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław

    2017-10-01

    Information management is an inseparable part of the command process. As a result, the person making decisions at the command post interacts with data-providing devices in various ways. The tools virtualization process can introduce a number of significant modifications in the design of solutions for management and command. The general idea involves replacing a physical device's user interface with its digital representation (a so-called virtual instrument). A more advanced level of system "digitalization" is to use mixed-reality environments. In solutions using augmented reality (AR), a customized HMI is displayed to the operator when he approaches each device. Devices are identified by image recognition of photo codes. Visualization is achieved by an (optical) see-through head-mounted display (HMD). Control can be done, for example, by means of a handheld touch panel. Using an immersive virtual environment, the command center can be digitally reconstructed; a workstation then requires only a VR system (HMD) and access to the information network. The operator can interact with devices as he would in the real world (for example, with virtual hands). Thanks to their procedures (analysis of central vision, eye tracking), MR systems offer another useful feature of reducing system data-throughput requirements, since at any given moment the focus is on a single device. Experiments carried out using Moverio BT-200 and SteamVR systems, and the results of experimental application testing, clearly indicate the ability to create a fully functional information system with the use of mixed reality technology.

  4. Kinematic evaluation of virtual walking trajectories.

    PubMed

    Cirio, Gabriel; Olivier, Anne-Hélène; Marchal, Maud; Pettré, Julien

    2013-04-01

    Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only in situations of geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework through a user study which considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
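
    One concrete instance of the kind of trajectographical criterion such a framework can use is the mean geometric deviation between the user's path and the simulated reference path after arc-length resampling; the paper's actual criteria set is richer, so this metric is only an illustration:

    ```python
    # Resample two 2D/3D paths to the same number of points by arc length,
    # then report the mean point-wise geometric deviation between them.
    import numpy as np

    def resample(path, n=100):
        path = np.asarray(path, dtype=float)
        # cumulative arc length along the path
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
        s = np.linspace(0.0, d[-1], n)
        return np.column_stack([np.interp(s, d, path[:, i]) for i in range(path.shape[1])])

    def mean_deviation(user_path, reference_path):
        a, b = resample(user_path), resample(reference_path)
        return float(np.mean(np.linalg.norm(a - b, axis=1)))

    # usage: a slightly curved walk compared to a straight reference
    straight = [[0, 0], [1, 0], [2, 0]]
    curved = [[0, 0], [1, 0.3], [2, 0]]
    print(mean_deviation(curved, straight))
    ```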

  5. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated to a symbiotic robot or virtual reality agent under thought-based-only control. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
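
    The BCI-to-robot link described here is, at its core, a UDP message from the classifier output to the robot or virtual agent; a minimal sketch follows (the port and message format are assumptions, not the projects' actual protocol):

    ```python
    # Forward a decoded user intention to a robot or virtual agent over UDP.
    # Host, port, and the plain-text command format are illustrative.
    import socket

    def send_bci_command(command: str, host="127.0.0.1", port=9000):
        """Send one decoded intention, e.g. 'turn_left', as a UDP datagram."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(command.encode("utf-8"), (host, port))

    send_bci_command("move_forward")
    ```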

  6. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  7. Screening-Engineered Field-Effect Solar Cells

    DTIC Science & Technology

    2012-01-01

    virtually any semiconductor, including the promising but hard-to-dope metal oxides, sulfides, and phosphides.3 Prototype SFPV devices have been...MIS interface. Unfortunately, MIS cells, though sporting impressive efficiencies,4-6 typically have short operating lifetimes due to surface state...instability at the MIS interface.7 Methods aimed at direct field-effect “doping” of semiconductors, in which the voltage is externally applied to a gate

  8. Exploring the simulation requirements for virtual regional anesthesia training

    NASA Astrophysics Data System (ADS)

    Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.

    2010-01-01

    This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking training methods. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relation of different human parts without any physical constraints. Furthermore, the proposed system aims to help trainee anaesthetists build a mental, three-dimensional map of the anatomical elements and their relationship to the ultrasound imaging used to navigate the anaesthetic needle. The interface elements are based on simplified visual representations of real objects and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface and presents a tentative plan of future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.

  9. Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.

    PubMed

    Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico

    2017-01-01

    Although Augmented Reality (AR) has been around for almost five decades, only recently have we witnessed AR systems and applications entering our everyday lives. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" or the Google Translate real-time sign interpretation app. Even though AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real piece of chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the wearable devices significantly improved performance in all three tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.

  10. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to developments in a number of leading-edge technologies and their synergistic combination/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks and pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  11. Prototype of haptic device for sole of foot using magnetic field sensitive elastomer

    NASA Astrophysics Data System (ADS)

    Kikuchi, T.; Masuda, Y.; Sugiyama, M.; Mitsumata, T.; Ohori, S.

    2013-02-01

    Walking is one of the most popular activities and a healthy aerobic exercise for the elderly. However, for those with physical and/or cognitive disabilities, it can be challenging to go somewhere unfamiliar. The final goal of this study is to develop a virtual reality walking system that allows users to walk in virtual worlds fabricated with computer graphics. We focus on a haptic device that can apply various plantar pressures to the user's soles as an additional sense during virtual reality walking. In this study, we discuss the use of a magnetic field sensitive elastomer (MSE) as a working material for the haptic interface on the sole. The first prototype with MSE was developed and evaluated in this work. According to the measurements of plantar pressures, it was found that this device can apply different pressures to the sole of a lightweight user by applying a magnetic field to the MSE. The results also implied the need to improve the magnetic circuit and the basic structure of the device's mechanism.

  12. First-principles study of the effects of Silicon doping on the Schottky barrier of TiSi2/Si interfaces

    NASA Astrophysics Data System (ADS)

    Wang, Han; Silva, Eduardo; West, Damien; Sun, Yiyang; Restrepo, Oscar; Zhang, Shengbai; Kota, Murali

    As scaling of semiconductor devices is pursued in order to improve power efficiency, quantum effects due to the reduced device dimensions have become dominant factors in power, performance, and area scaling. In particular, source/drain contact resistance has become a limiting factor in overall device power efficiency and performance. As a consequence, techniques such as heavy doping of the source and drain have been explored to reduce the contact resistance by shrinking the width of the depletion region and lowering the Schottky barrier height. In this work, we study the relation between doping in silicon and the Schottky barrier of a TiSi2/Si interface with first-principles calculations. The Virtual Crystal Approximation (VCA) is used to calculate the average potential of the interface with varying doping concentration, while the I-V curve for the corresponding interface is calculated with a generalized one-dimensional transfer matrix method. The behavior of substitutional and interstitial boron and phosphorus dopants near the interface, and their effect on tuning the Schottky barrier, is studied. These studies provide insight into the type of doping and the effect of dopant segregation needed to optimize metal-semiconductor interface resistance.
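
    As a rough illustration of the transfer matrix machinery mentioned above, the sketch below computes the transmission probability of an electron through a piecewise-constant 1D potential barrier; the paper's generalized method is more involved, and the barrier parameters here are arbitrary.

        import numpy as np

        HBAR2_2M = 3.81  # hbar^2 / (2 m_e), in eV * Angstrom^2

        def transmission(E, widths, potentials, v_left=0.0, v_right=0.0):
            # Transmission probability through piecewise-constant 1D regions.
            V = [v_left] + list(potentials) + [v_right]
            k = [np.sqrt(complex(E - v) / HBAR2_2M) for v in V]  # complex if E < V (tunneling)
            D = [np.array([[1, 1], [ki, -ki]]) for ki in k]      # continuity of psi and psi'
            M = np.linalg.inv(D[0])
            for j, d in enumerate(widths, start=1):
                P = np.diag([np.exp(-1j * k[j] * d), np.exp(1j * k[j] * d)])
                M = M @ D[j] @ P @ np.linalg.inv(D[j])
            M = M @ D[-1]
            t = 1.0 / M[0, 0]                                    # transmission amplitude
            return (k[-1].real / k[0].real) * abs(t) ** 2

        # A 0.5 eV, 4 Angstrom wide rectangular barrier; a 0.3 eV electron tunnels through
        print(transmission(E=0.3, widths=[4.0], potentials=[0.5]))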

  13. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC's Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  14. Multi-degree of freedom joystick for virtual reality simulation.

    PubMed

    Head, M J; Nelson, C A; Siu, K C

    2013-11-01

    A modular control interface and simulated virtual reality environment were designed and created in order to determine how the kinematic architecture of a control interface affects minimally invasive surgery training. A user is able to selectively determine the kinematic configuration of an input device (number, type and location of degrees of freedom) for a specific surgical simulation through the use of modular joints and constraint components. Furthermore, passive locking was designed and implemented through the use of inflated latex tubing around rotational joints in order to allow a user to step away from a simulation without unwanted tool motion. It is believed that these features will facilitate improved simulation of a variety of surgical procedures and, thus, improve surgical skills training.

  15. Portable Virtual Training Units

    NASA Technical Reports Server (NTRS)

    Malone, Reagan; Johnston, Alan

    2015-01-01

    The Mission Operations Lab initiated a project to design, develop, deliver, test, and validate a unique training system for astronaut and ground support personnel. In an effort to keep training costs low, virtual training units (VTUs) have been designed based on images of actual hardware and manipulated through a touch-screen-style interface for ground support personnel training. This project helped modernize the training system and materials by integrating them with mobile devices for training when operators or crew are unable to physically train in the facility. This project also tested the concept of a handheld remote device to control integrated trainers, using International Space Station (ISS) training simulators as a platform. The portable VTU can interface with the full-sized VTU, allowing a trainer co-located with a trainee to remotely manipulate a VTU and evaluate the trainee's response. This project helped determine whether it is useful, cost-effective, and beneficial for the instructor to have a portable handheld device to control the behavior of the models during training. This project has advanced NASA Marshall Space Flight Center's (MSFC's) VTU capabilities with modern and relevant technology to support space flight training needs of today and tomorrow.

  16. An augmented reality tool for learning spatial anatomy on mobile devices.

    PubMed

    Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti

    2017-09-01

    Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT)-derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.

  17. Mobile Learning: At the Tipping Point

    ERIC Educational Resources Information Center

    Franklin, Teresa

    2011-01-01

    Mobile technologies are interfacing with all aspects of our lives including Web 2.0 tools and applications, immersive virtual world environments, and online environments to present educational opportunities for 24/7 learning at the learner's discretion. Mobile devices are allowing educators to build new community learning ecosystems for and by…

  18. Evaluating the Usability of Pinchigator, a system for Navigating Virtual Worlds using Pinch Gloves

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Brookman, Stephen; Dumas, Joseph D. II; Tilghman, Neal

    2003-01-01

    Appropriate design of two-dimensional user interfaces (2D U/I) utilizing the well-known WIMP (Window, Icon, Menu, Pointing device) environment for computer software is well studied, and guidance can be found in several standards. Three-dimensional U/I design is not nearly as mature as 2D U/I, and standards bodies have not reached consensus on what makes a usable interface. This is especially true when the tools for interacting with the virtual environment may include stereo viewing, real-time trackers and pinch gloves instead of just a mouse and keyboard. Over the last several years the authors have created a 3D U/I system dubbed Pinchigator for navigating virtual worlds based on the dVise dV/Mockup visualization software, Fakespace Pinch Gloves and Polhemus trackers. The current work is to test the usability of the system on several virtual worlds, suggest improvements to increase Pinchigator's usability, and then to generalize about what was learned and how those lessons might be applied to improve other 3D U/I systems.

  19. Closed-loop dialog model of face-to-face communication with a photo-real virtual human

    NASA Astrophysics Data System (ADS)

    Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás

    2004-01-01

    We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.

  20. Rehabilitation of activities of daily living in virtual environments with intuitive user interface and force feedback.

    PubMed

    Chiang, Vico Chung-Lim; Lo, King-Hung; Choi, Kup-Sze

    2017-10-01

    To investigate the feasibility of using a virtual rehabilitation system with an intuitive user interface and force feedback to improve skills in activities of daily living (ADL). A virtual training system equipped with haptic devices was developed for the rehabilitation of three ADL tasks - door unlocking, water pouring and meat cutting. Twenty subjects with upper limb disabilities, supervised by two occupational therapists, received a four-session training using the system. The task completion time and the amount of water poured into a virtual glass were recorded. The performance of the three tasks in reality was assessed before and after the virtual training. Feedback from the participants was collected with questionnaires after the study. The completion time of the virtual tasks decreased during the training (p < 0.01) while the percentage of water successfully poured increased (p = 0.051). The score on the Borg scale of perceived exertion was 1.05 (SD = 1.85; 95% CI = 0.18-1.92) and that of the task-specific feedback questionnaire was 31 (SD = 4.85; 95% CI = 28.66-33.34). The feedback of the therapists suggested a positive rehabilitation effect. The participants had a positive perception of the system. The system can potentially be used as a tool to complement conventional rehabilitation approaches to ADL. Implications for rehabilitation: Rehabilitation of activities of daily living can be facilitated using computer-assisted approaches. The existing approaches focus on cognitive training rather than manual skills. A virtual training system with an intuitive user interface and force feedback was designed to improve the learning of manual skills. The study shows that the system could be used as a training tool to complement conventional rehabilitation approaches.

  1. Design and Calibration of a New 6 DOF Haptic Device

    PubMed Central

    Qin, Huanhuan; Song, Aiguo; Liu, Yuqing; Jiang, Guohua; Zhou, Bohe

    2015-01-01

    For many applications, such as tele-operated robots and interaction with virtual environments, it is better to have performance with force feedback than without. Haptic devices are force-reflecting interfaces that can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefiting from the unique design, it is a hybrid-structure device with a large workspace and high output capability; therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change postures without interrupting haptic tasks. To investigate the performance with regard to position-tracking accuracy and static output forces, we conducted experiments using a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors were calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed. PMID:26690449

  2. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

    The interface between the human controller and the remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing, a direct result of the need to support added complexities as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality and multi-degree-of-freedom input devices lend us the ability to augment the man/machine interface and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI to enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting-edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  3. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  4. Virtual coach technology for supporting self-care.

    PubMed

    Ding, Dan; Liu, Hsin-Yi; Cooper, Rosemarie; Cooper, Rory A; Smailagic, Asim; Siewiorek, Dan

    2010-02-01

    "Virtual Coach" refers to a coaching program or device aiming to guide users through tasks for the purpose of prompting positive behavior or assisting with learning new skills. This article reviews virtual coach interventions with the purpose of guiding rehabilitation professionals to comprehend more effectively the essential components of such interventions, the underlying technologies and their integration, and example applications. A design space of virtual coach interventions including self-monitoring, context awareness, interface modality, and coaching strategies were identified and discussed to address when, how, and what coaching messages to deliver in an automated and intelligent way. Example applications that address various health-related issues also are provided to illustrate how a virtual coach intervention is developed and evaluated. Finally, the article provides some insight into addressing key challenges and opportunities in designing and implementing virtual coach interventions. It is expected that more virtual coach interventions will be developed in the field of rehabilitation to support self-care and prevent secondary conditions in individuals with disabilities.

  5. Modeling and Design of an Electro-Rheological Fluid Based Haptic System for Tele-Operation of Space Robots

    NASA Technical Reports Server (NTRS)

    Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph

    2000-01-01

    For many years, the robotic community sought to develop robots that could eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks humans can perform significantly better but which, due to associated hazards, distance, physical limitations and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually allow only a limited operator workspace. In this paper a novel haptic interface is presented that enables human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability and assure crew safety. The electrically controlled stiffness is obtained using constrained electrorheological fluids (ERF), which change their viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device, in which the system viscosity changes in proportion to the force to be transmitted. In this paper, we present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
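
    A minimal sketch of the proportional control law implied above, mapping a force sensed at the robot end-effector to an excitation commanded to the ERF element; the gain, saturation limit and units are illustrative assumptions, not the paper's controller.

        def erf_excitation(force_measured, gain=0.8, v_max=5.0):
            # Map a sensed end-effector force (N) to an ERF control voltage (kV).
            # Viscosity, and hence the resistance felt by the operator, is assumed
            # to rise monotonically with the applied field, so the commanded
            # voltage is proportional to the force, clamped to the safe range.
            v = gain * abs(force_measured)
            return min(v, v_max)

        # e.g. a 3.2 N contact force sensed at the remote site
        print(erf_excitation(3.2))   # -> 2.56 (kV, illustrative units)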

  6. Video game interfaces for interactive lower and upper member therapy.

    PubMed

    Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron; Alves, Silas

    2013-01-01

    With recent advances in electronics and mechanics, a new trend in interaction is taking place, changing how we interact with our environment, daily tasks and other people. Even though sensor-based technologies and tracking systems have been around for several years, they have recently become affordable and are now used in several areas, such as physical and mental rehabilitation, educational applications, physical exercise, and natural interaction, among others. This work presents the integration of two mainstream videogame interfaces as tools for developing an interactive lower and upper member therapy tool. The goal is to study the potential of these devices as complementary didactic elements for improving and tracking user performance during a series of exercises with virtual and real devices.

  7. An optical brain computer interface for environmental control.

    PubMed

    Ayaz, Hasan; Shewokis, Patricia A; Bunce, Scott; Onaral, Banu

    2011-01-01

    A brain computer interface (BCI) is a system that translates neurophysiological signals detected from the brain to supply input to a computer or to control a device. Volitional control of neural activity and its real-time detection through neuroimaging modalities are key constituents of BCI systems. The purpose of this study was to develop and test a new BCI design that utilizes intention-related cognitive activity within the dorsolateral prefrontal cortex using functional near infrared (fNIR) spectroscopy. fNIR is a noninvasive, safe, portable and affordable optical technique with which to monitor hemodynamic changes in the brain's cerebral cortex. Because of its portability and ease of use, fNIR is amenable to deployment in ecologically valid natural working environments. We integrated a control paradigm in a computerized 3D virtual environment to augment interactivity. Ten healthy participants volunteered for a two-day study in which they navigated a virtual environment with keyboard inputs, but were required to use the fNIR-BCI for interaction with virtual objects. Results showed that participants consistently utilized the fNIR-BCI with an overall success rate of 84% and volitionally increased their cerebral oxygenation level to trigger actions within the virtual environment.
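
    As a rough illustration of the trigger logic such a system could use, the sketch below fires an action when the measured oxygenation stays above a baseline-relative threshold for a sustained run of samples; the baseline scheme and thresholds are illustrative assumptions, not the study's algorithm.

        from collections import deque

        class OxygenationTrigger:
            # Fire when oxygenation exceeds baseline + margin for `hold` samples.

            def __init__(self, baseline_len=100, margin=0.5, hold=10):
                self.baseline = deque(maxlen=baseline_len)  # recent resting samples
                self.margin = margin                        # required rise over baseline
                self.hold = hold                            # consecutive samples required
                self.streak = 0

            def update(self, oxy_sample):
                base = sum(self.baseline) / len(self.baseline) if self.baseline else oxy_sample
                if oxy_sample > base + self.margin:
                    self.streak += 1
                else:
                    self.streak = 0
                    self.baseline.append(oxy_sample)  # only track baseline while at rest
                return self.streak >= self.hold       # True -> trigger the virtual action

        trigger = OxygenationTrigger()
        for sample in [0.1, 0.2, 0.1] + [0.9] * 12:   # simulated fNIR readings
            if trigger.update(sample):
                print("action triggered")
                break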

  8. Possible applications of the LEAP motion controller for more interactive simulated experiments in augmented or virtual reality

    NASA Astrophysics Data System (ADS)

    Wozniak, Peter; Vauderwange, Oliver; Mandal, Avikarsha; Javahiraly, Nicolas; Curticapean, Dan

    2016-09-01

    Practical exercises are a crucial part of many curricula. Even simple exercises can improve the understanding of the underlying subject. Most experimental setups require special hardware: to carry out, e.g., a lens experiment, the students need access to an optical bench, various lenses, light sources, apertures and a screen. In our previous publication we demonstrated the use of augmented reality visualization techniques to let the students prepare with a simulated experimental setup. Within the context of our intended blended learning concept we want to utilize augmented or virtual reality techniques for stationary laboratory exercises. Unlike applications running on mobile devices, stationary setups can be extended more easily with additional interfaces and thus allow for more complex interactions and simulations in virtual reality (VR) and augmented reality (AR). The most significant difference is the possibility of allowing interactions beyond touching a screen. The LEAP Motion controller is a small, inexpensive device that allows for the tracking of the user's hands and fingers in three dimensions. It is conceivable to allow the user to interact with the simulation's virtual elements through the user's own hand position, movement and gestures. In this paper we evaluate possible applications of the LEAP Motion controller for simulated experiments in augmented and virtual reality. We pay particular attention to the device's strengths and weaknesses and point out useful and less useful application scenarios.
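
    A minimal sketch of how per-frame hand-tracking data could drive such a simulation, here letting a pinch gesture grab and drag a virtual lens; on_frame() would be fed by the controller SDK's tracking loop, and all names and thresholds here are hypothetical rather than the real LEAP API.

        import math

        GRAB_RADIUS = 30.0             # mm; how close the pinch must be to grab the lens
        lens_pos = [0.0, 200.0, 0.0]   # virtual lens position in tracker coordinates
        held = False

        def on_frame(pinch_strength, fingertip_pos):
            # Per-frame update: pinching near the lens grabs it, moving drags it.
            global held, lens_pos
            dist = math.dist(fingertip_pos, lens_pos)
            if pinch_strength > 0.8 and dist < GRAB_RADIUS:
                held = True
            elif pinch_strength < 0.3:
                held = False          # releasing the pinch drops the lens
            if held:
                lens_pos = list(fingertip_pos)   # lens follows the pinching hand

        # e.g. one frame delivered by the (hypothetical) tracking loop
        on_frame(pinch_strength=0.9, fingertip_pos=(5.0, 210.0, 2.0))
        print(lens_pos)   # -> [5.0, 210.0, 2.0]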

  9. Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces

    NASA Astrophysics Data System (ADS)

    Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana

    The aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive system capable of placing trainees in a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user’s “sense of touch” within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system is encouraging.

  10. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka MicroElectroMechanical Systems - MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side is located at Texas Tech University and hosts a server machine that runs the Linux operating system and interfaces the MEMS lab with the outside world via the internet. The controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabVIEW Virtual Instruments. An optical microscope (100×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.

  11. Loop Group Parakeet Virtual Cable Concept Demonstrator

    NASA Astrophysics Data System (ADS)

    Dowsett, T.; McNeill, T. C.; Reynolds, A. B.; Blair, W. D.

    2002-07-01

    The Parakeet Virtual Cable (PVC) concept demonstrator uses the Ethernet Local Area Network (LAN) laid for the Battle Command Support System (BCSS) to connect the Parakeet DVT(DA) (voice terminal) to the Parakeet multiplexer. This currently requires pairs of PVC interface units to be installed for each DVT(DA). To reduce the cost of a PVC installation, the concept of a Loop Group Parakeet Virtual Cable (LGPVC) was proposed. This device was designed to replace the up to 30 PVC boxes and the multiplexer at the multiplexer side of a PVC installation. While the demonstrator is largely complete, testing has revealed an incomplete understanding of how to emulate the proprietary handshaking occurring between the circuit switch and the multiplexer. The LGPVC concept cannot yet be demonstrated.

  12. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  13. Implementing virtual reality interfaces for the geosciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.; Jacobsen, J.; Austin, A.

    1996-06-01

    For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, running the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display. Rich, extensible and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is the obstacle in that typing is cumbersome, error-prone and typically slow. In the latter, the user can interact with these parameters by means of motor skills which are highly developed. Two specific geoscience application areas will be highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.

  14. HTC Vive MeVisLab integration via OpenVR for medical applications

    PubMed Central

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

    Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain. Examples are intervention planning, training and simulation. This is especially of use in medical operations where an aesthetic outcome is important, such as facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection. PMID:28323840
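
    A minimal sketch of initializing OpenVR from Python via the community pyopenvr bindings, in the spirit of the integration described above; the exact binding calls are an assumption, and the MeVisLab module itself is written against the C++ OpenVR library.

        import openvr  # community pyopenvr bindings around the OpenVR C++ API

        # Connect to the running SteamVR/OpenVR runtime as a scene application
        vr_system = openvr.init(openvr.VRApplication_Scene)
        try:
            # Ask the runtime for the per-eye render-target size the HMD wants
            width, height = vr_system.getRecommendedRenderTargetSize()
            print(f"per-eye render target: {width}x{height}")
            # A full integration would render both eyes every frame and submit
            # the textures to the compositor for display inside the headset.
        finally:
            openvr.shutdown()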

  15. HTC Vive MeVisLab integration via OpenVR for medical applications.

    PubMed

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

    Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain. Examples are intervention planning, training and simulation. This is especially of use in medical operations where an aesthetic outcome is important, such as facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.

  16. A Web-based cost-effective training tool with possible application to brain injury rehabilitation.

    PubMed

    Wang, Peijun; Kreutzer, Ina Anna; Bjärnemo, Robert; Davies, Roy C

    2004-06-01

    Virtual reality (VR) has provoked enormous interest in the medical community. In particular, VR offers therapists new approaches for improving rehabilitation effects. However, most of these VR assistant tools are not very portable, extensible or economical. Due to the vast amount of 3D data, they are not suitable for Internet transfer. Furthermore, in order to run these VR systems smoothly, special hardware devices are needed. As a result, existing VR assistant tools tend to be available in hospitals but not in patients' homes. To overcome these disadvantages, as a case study, this paper proposes a Web-based Virtual Ticket Machine, called WBVTM, using VRML [VRML Consortium, The Virtual Reality Modeling Language: International Standard ISO/IEC DIS 14772-1, 1997], Java and the EAI (External Authoring Interface) [Silicon Graphics, Inc., The External Authoring Interface (EAI)] to help people with acquired brain injury (ABI) relearn basic living skills at home at a low cost. As these technologies are open standards and usable over the Internet, WBVTM achieves the goals of portability, easy accessibility and cost-effectiveness.

  17. Interaction devices for hands-on desktop design

    NASA Astrophysics Data System (ADS)

    Ju, Wendy; Madsen, Sally; Fiene, Jonathan; Bolas, Mark T.; McDowall, Ian E.; Faste, Rolf

    2003-05-01

    Starting with a list of typical hand actions - such as touching or twisting - a collection of physical input device prototypes was created to study better ways of engaging the body and mind in the computer aided design process. These devices were interchangeably coupled with a graphics system to allow for rapid exploration of the interplay between the designer's intent, body motions, and the resulting on-screen design. User testing showed that a number of key considerations should influence the future development of such devices: coupling between the physical and virtual worlds, tactile feedback, and scale. It is hoped that these explorations contribute to the greater goal of creating user interface devices that increase the fluency, productivity and joy of computer-augmented design.

  18. Tactile Data Entry System

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    2015-01-01

    The patent-pending Glove-Enabled Computer Operations (GECO) design leverages extravehicular activity (EVA) glove design features as platforms for instrumentation and tactile feedback, enabling the gloves to function as human-computer interface devices. Flexible sensors in each finger enable control inputs that can be mapped to any number of functions (e.g., a mouse click, a keyboard strike, or a button press). Tracking of hand motion is interpreted alternatively as movement of a mouse (change in cursor position on a graphical user interface) or a change in hand position on a virtual keyboard. Programmable vibro-tactile actuators aligned with each finger enrich the interface by creating the haptic sensations associated with control inputs, such as recoil of a button press.
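
    A minimal sketch of the sensor-to-event mapping described above, turning per-finger flex readings into discrete input events on press transitions; the sensor-reading function, thresholds and event names are hypothetical, not part of the GECO design.

        FLEX_THRESHOLD = 0.7   # normalized bend at which a finger counts as "pressed"

        # Hypothetical mapping from finger index to an input event
        FINGER_EVENTS = {0: "mouse_left_click", 1: "mouse_right_click", 2: "enter_key"}

        def poll_glove(read_flex_sensors, prev_state):
            # Emit an event on each finger's press transition (rising edge).
            events = []
            state = [bend > FLEX_THRESHOLD for bend in read_flex_sensors()]
            for finger, pressed in enumerate(state):
                if pressed and not prev_state[finger] and finger in FINGER_EVENTS:
                    events.append(FINGER_EVENTS[finger])
            return events, state

        # e.g. the index finger (0) flexes past the threshold on this frame
        events, state = poll_glove(lambda: [0.9, 0.1, 0.2], [False, False, False])
        print(events)   # -> ['mouse_left_click']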

  19. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims: Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods: The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting to the R server using the Java R Interface. Mobile devices communicate via representational state transfer (REST) web services. Furthermore, a simple web-based setup allows non-statisticians to configure the appropriate statistics. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results: Apps are provided for iOS and Android, and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion: Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees strictly sequential processing at all trial sites. Covering 88% of the randomization models used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
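
    A minimal sketch of the "virtual" randomization idea described above, where no allocation list is stored and the n-th subject's arm is recomputed on demand from the trial's seed; permuted blocks are one of the supported models, but the seeding scheme and parameters here are illustrative assumptions.

        import random

        def virtual_allocation(trial_seed, subject_index, arms=("A", "B"), block_size=4):
            # Permuted-block allocation computed on demand: no stored list.
            # Every call with the same seed and index yields the same arm, so
            # all sites see one consistent, strictly sequential sequence.
            block_no, pos = divmod(subject_index, block_size)
            rng = random.Random(f"{trial_seed}:{block_no}")   # deterministic per block
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
            return block[pos]

        # Subjects 0..7 of trial 'TRIAL-042': balanced within each block of 4
        print([virtual_allocation("TRIAL-042", i) for i in range(8)])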

  20. Seven Capital Devices for the Future of Stroke Rehabilitation

    PubMed Central

    Iosa, M.; Morone, G.; Fusco, A.; Bragoni, M.; Coiro, P.; Multari, M.; Venturiero, V.; De Angelis, D.; Pratesi, L.; Paolucci, S.

    2012-01-01

    Stroke is the leading cause of long-term disability for adults in industrialized societies. Rehabilitation efforts aim to avoid long-term impairments but, at present, rehabilitative outcomes are still poor. Novel tools based on new technologies have been developed to improve motor recovery. In this paper, we consider seven promising technologies that can improve rehabilitation of patients with stroke in the near future: (1) robotic devices for lower and upper limb recovery, (2) brain computer interfaces, (3) noninvasive brain stimulators, (4) neuroprostheses, (5) wearable devices for quantitative human movement analysis, (6) virtual reality, and (7) tablet PCs used for neurorehabilitation. PMID:23304640

  1. A virtual work space for both hands manipulation with coherency between kinesthetic and visual sensation

    NASA Technical Reports Server (NTRS)

    Ishii, Masahiro; Sukanya, P.; Sato, Makoto

    1994-01-01

    This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually act in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel physical properties such as contact, weight, etc. We investigated suitable conditions for constructing our virtual work space by simulating some basic assembly work, a face-and-fit task. We then selected the conditions under which the subjects felt most comfortable in performing this task and set up our virtual work space. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and letting the subjects create new models by assembling these components. The subjects could naturally perform assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for performing tasks that require two-handed manipulation or cooperation between both hands in a natural manner.

  2. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills

    NASA Astrophysics Data System (ADS)

    Choi, Kup-Sze

    This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, with a focus on the skills required for making a cross-shaped trench in the cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools, and a haptic device is used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituent tetrahedra of the lens model. Collisions between the virtual probe and the lens are effectively identified by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
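
    As a rough illustration of the octree idea mentioned above, the sketch below partitions representative lens points hierarchically and tests only nearby cells against the probe tip; the structure and parameters are illustrative, not the simulator's implementation.

        class Octree:
            # Hierarchical partition of 3D points for fast proximity queries.

            def __init__(self, points, center, half, depth=4, leaf_size=8):
                self.center, self.half, self.points, self.children = center, half, [], []
                if depth == 0 or len(points) <= leaf_size:
                    self.points = points
                    return
                buckets = {}
                for p in points:   # assign each point to one of 8 octants
                    key = tuple(p[i] > center[i] for i in range(3))
                    buckets.setdefault(key, []).append(p)
                for key, pts in buckets.items():
                    c = [center[i] + (half / 2 if key[i] else -half / 2) for i in range(3)]
                    self.children.append(Octree(pts, c, half / 2, depth - 1, leaf_size))

            def query(self, tip, radius):
                # Return points within `radius` of the probe tip, pruning
                # subtrees whose cell cannot contain a hit.
                if any(abs(tip[i] - self.center[i]) > self.half + radius for i in range(3)):
                    return []
                hits = [p for p in self.points
                        if sum((p[i] - tip[i]) ** 2 for i in range(3)) <= radius ** 2]
                for child in self.children:
                    hits += child.query(tip, radius)
                return hits

        pts = [(0.1, 0.2, 0.0), (0.5, 0.5, 0.5), (-0.4, 0.1, 0.3)]  # e.g. tetra centroids
        tree = Octree(pts, center=(0, 0, 0), half=1.0)
        print(tree.query(tip=(0.12, 0.18, 0.02), radius=0.1))   # -> [(0.1, 0.2, 0.0)]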

  3. Perception and Haptic Rendering of Friction Moments.

    PubMed

    Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T

    2011-01-01

    This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independent of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the human's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as perceptions during object manipulation in a virtual reality environment, were evaluated experimentally.

  4. A virtual reality interface for pre-planning of surgical operations based on a customized model of the patient

    NASA Astrophysics Data System (ADS)

    Witkowski, Marcin; Lenar, Janusz; Sitnik, Robert; Verdonschot, Nico

    2012-03-01

    We present a human-computer interface that enables the operator to plan a surgical procedure on a musculoskeletal (MS) model of the patient's lower limbs, send the modified model to the bio-mechanical analysis module, and export the scenario parameters to the surgical navigation system. The interface provides the operator with tools for importing the customized MS model of the patient, cutting bones and manipulating/removing bony fragments, repositioning muscle insertion points, removing muscles, and placing implants. After planning, the operator exports the modified MS model for bio-mechanical analysis of the functional outcome. If the simulation result is satisfactory, the exported scenario data may be used directly during the actual surgery. The advantages of the developed interface are the possibility of installing it in various hardware configurations and its coherent operation regardless of the devices used. The hardware configurations proposed for use with the interface are: (a) a standard computer keyboard and mouse, and a 2-D display, (b) a touch screen as a single device for both input and output, or (c) a 3-D display and a haptic device for natural manipulation of 3-D objects. The interface may be utilized in two main fields. Experienced surgeons may use it to simulate their intervention plans and prepare input data for a surgical navigation system, while students or novice surgeons can use it to simulate the results of a hypothetical procedure. The interface has been developed in the TLEMsafe project (www.tlemsafe.eu) funded by the European Commission FP7 program.

  5. Measuring Presence in Virtual Environments

    DTIC Science & Technology

    1994-10-01

    viewpoint to change what they see, or to reposition their head to affect binaural hearing, or to search the environment haptically, they will experience a...increase presence in an alternate environment. For example a head mounted display that isolates the user from the real world may increase the sense...movement interface devices such as treadmills and trampolines , different gloves, and auditory equipment. Even as a low end technological implementation of

  6. A haptic device for guide wire in interventional radiology procedures.

    PubMed

    Moix, Thomas; Ilic, Dejan; Bleuler, Hannes; Zoethout, Jurjen

    2006-01-01

    Interventional Radiology (IR) is a minimally invasive procedure where thin tubular instruments, guide wires and catheters, are steered through the patient's vascular system under X-ray imaging. In order to perform these procedures, a radiologist has to be trained to master hand-eye coordination, instrument manipulation and procedure protocols. The existing simulation systems all have major drawbacks: the use of modified instruments, unrealistic insertion lengths, high inertia of the haptic device that creates a noticeably degraded dynamic behavior or excessive friction that is not properly compensated for. In this paper we propose a quality training environment dedicated to IR. The system is composed of a virtual reality (VR) simulation of the patient's anatomy linked to a robotic interface providing haptic force feedback. This paper focuses on the requirements, design and prototyping of a specific haptic interface for guide wires.

  7. A Bidirectional Brain-Machine Interface Algorithm That Approximates Arbitrary Force-Fields

    PubMed Central

    Semprini, Marianna; Mussa-Ivaldi, Ferdinando A.; Panzeri, Stefano

    2014-01-01

    We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with Multi-Dimensional-Scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop. PMID:24626393
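
    The closed loop described above can be pictured with a toy sketch: a linear decoder maps firing rates to a planar force acting on a point mass, and an encoder maps the mass position back to stimulation intensities. Everything below (the matrix W, the synthetic rates, the channel model) is a placeholder assumption, not the authors' fitted interface.

```python
# Minimal sketch of a force-field BMI loop: decode rates -> force on a point
# mass; encode position -> stimulation pattern. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n_units = 20
W = rng.normal(scale=0.05, size=(2, n_units))  # decoder: rates -> planar force
m, dt = 1.0, 0.01                              # point-mass parameters

def decode_force(rates):
    return W @ rates                           # F = W r (linear force-field decoder)

def encode_position(pos, n_channels=8):
    """Map 2-D position to per-channel stimulation intensities (toy encoder)."""
    angles = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    prefs = np.stack([np.cos(angles), np.sin(angles)])   # preferred directions
    return np.clip(prefs.T @ pos, 0, None)               # rectified projections

x, v = np.zeros(2), np.zeros(2)
for _ in range(500):                           # simulated closed-loop iterations
    rates = rng.poisson(5.0, n_units)          # synthetic motor-cortical activity
    F = decode_force(rates)
    v += (F / m) * dt                          # Euler update of the point mass
    x += v * dt
    stim = encode_position(x)                  # would drive sensory stimulation
```

    The paper's contribution is precisely that the linear map W above only works when the position-to-force field is invertible; their MDS-based construction replaces it with a nonlinear map over spike-train space.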

  8. A training platform for many-dimensional prosthetic devices using a virtual reality environment

    PubMed Central

    Putrino, David; Wong, Yan T.; Weiss, Adam; Pesaran, Bijan

    2014-01-01

    Brain machine interfaces (BMIs) have the potential to assist in the rehabilitation of millions of patients worldwide. Despite recent advancements in BMI technology for the restoration of lost motor function, a training environment to restore full control of the anatomical segments of an upper limb extremity has not yet been presented. Here, we develop a virtual upper limb prosthesis with 27 independent dimensions, the anatomical dimensions of the human arm and hand, and deploy the virtual prosthesis as an avatar in a virtual reality environment (VRE) that can be controlled in real-time. The prosthesis avatar accepts kinematic control inputs that can be captured from movements of the arm and hand as well as neural control inputs derived from processed neural signals. We characterize the system performance under kinematic control using a commercially available motion capture system. We also present the performance under kinematic control achieved by two non-human primates (Macaca mulatta) trained to use the prosthetic avatar to perform reaching and grasping tasks. This is the first virtual prosthetic device that is capable of emulating all the anatomical movements of a healthy upper limb in real-time. Since the system accepts both neural and kinematic inputs for a variety of many-dimensional skeletons, we propose it provides a customizable training platform for the acquisition of many-dimensional neural prosthetic control. PMID:24726625

  9. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    NASA Astrophysics Data System (ADS)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  10. Delivery of inhaled drugs for infants and small children: a commentary on present and future needs.

    PubMed

    Fink, James B

    2012-11-01

    Although the manufacture of inhaled medications is a multibillion dollar industry, virtually no pharmaceutical drug/device combination has been approved for inhalation across the range of pediatric patient ages and sizes. The clinician who treats neonates, infants, or toddlers is often faced with the dilemma of prescribing inhaled medications that may be disease appropriate but have not been approved for use in patients in these age categories. Their use is thus technically "off label," with limited empirical data to guide both dose and device selection. This dilemma requires the prescribing physician to go beyond the limitations of the product label, often without benefit of appropriately designed clinical trials, in an attempt to select safe and effective doses for use with these smallest of patients. The vast majority of drugs approved for inhalation were studied by using aerosol devices designed for older children and adults using a mouthpiece interface, which may not be practical for use in infants and patients aged <4 years. The selection of the most age-appropriate device and interface is critical for the effective administration of the prescribed therapy. In the absence of industry-sponsored clinical trials in neonates, infants, and toddlers, in vitro and in vivo strategies may help guide age-appropriate dosing, device, and interface selection to better inform clinical practice. In this commentary, the challenges in developing and prescribing effective formulations for aerosol delivery across the range of pediatric ages and sizes are explored, with guidance for device and interface selection. Recommendations for future collaborative sharing of in vitro models and age-specific breathing patterns between academic and industry researchers could help regulators and clinicians better understand the impact age and size have on pulmonary drug delivery.

  11. Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness Dr. Syed Adeel Ahmed, Xavier University of...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In...navigate through a virtual environment. The wand interface provides a significantly improved means of interaction. This study quantitatively measures the

  12. Applications of virtual reality technology in pathology.

    PubMed

    Grimes, G J; McClellan, S A; Goldman, J; Vaughn, G L; Conner, D A; Kujawski, E; McDonald, J; Winokur, T; Fleming, W

    1997-01-01

    TelePath(SM) is a telerobotic system utilizing virtual microscope concepts based on high-quality still digital imaging and aimed at real-time support for surgery by remote diagnosis of frozen sections. Many hospitals and clinics have an application for the remote practice of pathology, particularly in the area of reading frozen sections in support of surgery, commonly called anatomic pathology. The goal is to project the expertise of the pathologist into the remote setting by providing access to the microscope slides with an image quality and human interface comparable to what would be experienced at a real rather than a virtual microscope. A working prototype of a virtual microscope has been defined and constructed which has the needed performance in both image quality and human interface for a pathologist to work remotely. This is accomplished through the use of telerobotics and an image quality that gives the virtual microscope the same diagnostic capabilities as a real microscope. The examination of frozen sections is performed in a two-dimensional world. The remote pathologist is in a virtual world with the same capabilities as a "real" microscope, but response times may be slower depending on the specific computing and telecommunication environments. The TelePath system has capabilities far beyond a normal biological microscope, such as the ability to create a low-power image of the entire sample from multiple images digitally stitched together; the ability to digitally retrace a viewing trajectory; and the ability to archive images using CD-ROM and other mass storage devices.

  13. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.

  14. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  15. Fast, cheap and in control: spectral imaging with handheld devices

    NASA Astrophysics Data System (ADS)

    Gooding, Edward A.; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.

    2017-05-01

    Remote sensing has moved out of the laboratory and into the real world. Instruments using reflection or Raman imaging modalities become faster, cheaper and more powerful annually. Enabling technologies include virtual slit spectrometer design, high power multimode diode lasers, fast open-loop scanning systems, low-noise IR-sensitive array detectors and low-cost computers with touchscreen interfaces. High-volume manufacturing assembles these components into inexpensive portable or handheld devices that make possible sophisticated decision-making based on robust data analytics. Examples include threat, hazmat and narcotics detection; remote gas sensing; biophotonic screening; environmental remediation and a host of other applications.

  16. Training Spatial Knowledge Acquisition Using Virtual Environments

    DTIC Science & Technology

    2000-04-03

    bands of textures and tiles them together to form long, continuous swaths of texture. This paper summarizes these tools and their function, and...which allows it to control these devices. It also includes a texture- tiling application to precisely line up frames from the camera to create wall...that textures exist a priori and has no support for cropping and tiling textures within the program, much less an interface to hardware specifically

  17. A Multi-Finger Interface with MR Actuators for Haptic Applications.

    PubMed

    Qin, Huanhuan; Song, Aiguo; Gao, Zhan; Liu, Yuqing; Jiang, Guohua

    2018-01-01

    Haptic devices with multi-finger input are highly desirable in providing realistic and natural feelings when interacting with the remote or virtual environment. Compared with conventional actuators, MR (magneto-rheological) actuators are preferable options in haptics because of larger passive torque and torque-volume ratios. Among the existing haptic MR actuators, most are still bulky and heavy; if they were smaller and lighter, they would become more suitable for haptics. In this paper, a small-scale yet powerful MR actuator was designed to build a multi-finger interface for a 6-DOF haptic device. The compact structure was achieved by adopting the multi-disc configuration. Based on this configuration, the MR actuator can generate a maximum torque of 480 N·mm with dimensions of only 36 mm diameter and 18 mm height. Performance evaluation showed that it exhibits a relatively high dynamic range and good response characteristics when compared with some other haptic MR actuators. The multi-finger interface is equipped with three MR actuators and can provide up to 8 N of passive force to the thumb, index and middle fingers, respectively. An application example was used to demonstrate the effectiveness and potential of this new MR-actuator-based interface.
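
    For orientation, a standard first-order model for a multi-disc, shear-mode MR brake relates torque to the field-dependent yield stress of the fluid. The relation below is this generic textbook form, stated under assumed symbols, not the paper's specific design equations:

```latex
% Generic shear-mode disc MR brake model (assumed form, not the paper's):
% N active fluid interfaces with inner/outer radii r_i, r_o, gap h, fluid
% viscosity mu, disc speed omega, and field-dependent yield stress tau_y(B).
\[
T \;=\; N\left[\frac{2\pi\,\tau_y(B)}{3}\bigl(r_o^{3}-r_i^{3}\bigr)
\;+\; \frac{\pi\,\mu\,\omega}{2\,h}\bigl(r_o^{4}-r_i^{4}\bigr)\right]
\]
% The first term is the controllable (field-dependent) torque; the second is
% passive viscous drag. Stacking discs raises N, hence torque, without
% increasing the actuator diameter, which is the design lever used above.
```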

  18. Virtual Instrumentation for a Fiber-Optics-Based Artificial Nerve

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Kyaw, Thet Mon; Griffin, DeVon (Technical Monitor)

    2001-01-01

    A LabVIEW-based computer interface for fiber-optic artificial nerves has been devised as a Masters thesis project. The project involves the use of outputs from wavelength-multiplexed optical fiber sensors (artificial nerves), which are capable of producing dense optical data outputs for physical measurements. A potential advantage of using optical fiber sensors for sensory function restoration is that well-defined WDM-modulated signals can be transmitted to and from the sensing region, allowing networked units to replace low-level nerve functions for persons desirous of "intelligent artificial limbs." Various FO sensors can be designed with high sensitivity and the ability to be interfaced with a wide range of devices, including miniature shielded electrical conversion units. Our Virtual Instrument (VI) interface software package was developed using the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) package. The virtual instrument has been configured to arrange and encode the data to develop an intelligent response in the form of encoded, digitized signal outputs. The architectural layout of the nervous system is such that different touch stimuli from different artificial fiber-optic nerve points correspond to gratings of a distinct resonant wavelength and physical location along the optical fiber. Thus, when an automated, tunable diode laser scans the wavelength spectrum of the artificial nerve, it triggers responses that are encoded with different touch stimuli by way of wavelength shifts in the reflected Bragg resonances. The reflected light is detected and the resulting analog signal is fed into an ADC1 board and DAQ card. Finally, the software has been written such that the experimenter is able to set the response range during data acquisition.

  19. Natural gesture interfaces

    NASA Astrophysics Data System (ADS)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of a system for interacting with virtual objects based on gestures. It also discusses the common problems of interaction with virtual objects and the specific interface requirements of virtual and augmented reality.

  20. A Mobile Virtual Butler to Bridge the Gap between Users and Ambient Assisted Living: A Smart Home Case Study

    PubMed Central

    Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António

    2014-01-01

    Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities that keep their native features and characteristics while being seamlessly promoted to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home (AISH). The initiative arose as a way to address a major scourge of our country, where many elderly persons live alone in their homes, often with limited or no physical mobility. The project relies on the mobile-agent computing paradigm to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, so as to always be present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when s/he goes out. According to the evaluation carried out, the Virtual Butler is assessed as a very interesting and well-loved digital friend, filling the gap between the user and the smart home. The evaluation also showed that the Virtual Butler concept can easily be ported to other types of smart and assistive environments, such as airports, hospitals, shopping malls and offices. PMID:25102342

  1. A mobile Virtual Butler to bridge the gap between users and ambient assisted living: a Smart Home case study.

    PubMed

    Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António

    2014-08-06

    Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities that keep their native features and characteristics while being seamlessly promoted to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home (AISH). The initiative arose as a way to address a major scourge of our country, where many elderly persons live alone in their homes, often with limited or no physical mobility. The project relies on the mobile-agent computing paradigm to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, so as to always be present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when s/he goes out. According to the evaluation carried out, the Virtual Butler is assessed as a very interesting and well-loved digital friend, filling the gap between the user and the smart home. The evaluation also showed that the Virtual Butler concept can easily be ported to other types of smart and assistive environments, such as airports, hospitals, shopping malls and offices.

  2. Virtual button interface

    DOEpatents

    Jones, Jake S.

    1999-01-01

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.

  3. Virtual surface characteristics of a tactile display using magneto-rheological fluids.

    PubMed

    Lee, Chul-Hee; Jang, Min-Gyu

    2011-01-01

    Virtual surface characteristics of tactile displays are investigated to characterize the feeling of human touch for a haptic interface application. In order to represent the tactile feeling, a prototype tactile display incorporating Magneto-Rheological (MR) fluid has been developed. Tactile display devices simulate the finger's skin to feel the sensations of contact such as compliance, friction, and topography of the surface. Thus, the tactile display can provide information on the surface of an organic tissue to the surgeon in virtual reality. In order to investigate the compliance feeling of a human finger's touch, normal force responses of a tactile display under various magnetic fields have been assessed. Also, shearing friction force responses of the tactile display are investigated to simulate the action of finger dragging on the surface. Moreover, different matrix arrays of magnetic poles are applied to form the virtual surface topography. From the results, different tactile feelings are observed according to the applied magnetic field strength as well as the arrays of magnetic poles combinations. This research presents a smart tactile display technology for virtual surfaces.

  4. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
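
    A compact sketch of a quaternion-based complementary filter of the kind mentioned above follows: the gyroscope is integrated for short-term orientation and a gravity-derived tilt estimate corrects long-term drift. The gain and helper names are illustrative assumptions, not the authors' implementation (which also fuses magnetometer data).

```python
# Hedged sketch of a quaternion complementary filter: gyro integration blended
# with an accelerometer (gravity) tilt correction. Gains are illustrative.
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step(q, gyro, accel, dt, alpha=0.98):
    """One filter update: gyro integration blended with a gravity correction."""
    # Integrate angular rate: dq/dt = 0.5 * q * (0, wx, wy, wz)
    q_gyro = q + 0.5 * dt * q_mul(q, np.concatenate(([0.0], gyro)))
    q_gyro /= np.linalg.norm(q_gyro)

    # Tilt estimated directly from the normalized gravity vector.
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    q_acc = np.array([cr*cp, sr*cp, cr*sp, -sr*sp])  # zero-yaw tilt quaternion

    # Complementary blend: trust the gyro short-term, gravity long-term.
    q_new = alpha * q_gyro + (1 - alpha) * q_acc
    return q_new / np.linalg.norm(q_new)
```

    Note the simplification: this toy blend also pulls yaw toward zero, whereas a full implementation corrects only the tilt component and uses the magnetometer for heading.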

  5. A haptic interface for virtual simulation of endoscopic surgery.

    PubMed

    Rosenberg, L B; Stredney, D

    1996-01-01

    Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware that allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.

  6. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Fault Identification Dr. Syed Adeel Ahmed, Xavier University...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In...the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods

  7. Virtual button interface

    DOEpatents

    Jones, J.S.

    1999-01-12

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.

  8. Network and user interface for PAT DOME virtual motion environment system

    NASA Technical Reports Server (NTRS)

    Worthington, J. W.; Duncan, K. M.; Crosier, W. G.

    1993-01-01

    The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper concentrates on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard and low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications with the rest of the system through Novell Netware IPX. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol had to be implemented. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.
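
    The TCP-versus-UDP point above generalizes: for fixed-rate scene updates, a datagram that arrives late is simply superseded, whereas TCP queues stale data behind retransmissions. The sketch below illustrates the datagram pattern in Python (the original system was written in C and Ada); the address, port and packet layout are invented for illustration.

```python
# Illustrative sketch, not the DOME PAT code: sending fixed-rate scene-state
# updates over UDP, the datagram approach the abstract adopted after TCP
# buffering added latency. Host, port and payload format are made up.
import socket
import struct
import time

IG_ADDR = ("192.0.2.10", 5005)     # hypothetical image-generator address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_state(seq, head_pose):
    # Pack a sequence number and six floats (position + orientation).
    payload = struct.pack("<I6f", seq, *head_pose)
    sock.sendto(payload, IG_ADDR)  # fire-and-forget: a lost or late packet is
                                   # dropped, never queued behind newer data

for seq in range(300):             # 30 Hz for 10 seconds
    send_state(seq, (0.0, 1.6, 0.0, 0.0, 0.0, 0.0))
    time.sleep(1 / 30)
```

    A receiver simply keeps the datagram with the highest sequence number seen so far, so a dropped update never delays the current state.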

  9. LabVIEW Interface for PCI-SpaceWire Interface Card

    NASA Technical Reports Server (NTRS)

    Lux, James; Loya, Frank; Bachmann, Alex

    2005-01-01

    This software provides a LabVIEW interface to the NT drivers for the PCI-SpaceWire card, which is a peripheral component interface (PCI) bus interface that conforms to the IEEE-1355/SpaceWire standard. As SpaceWire grows in popularity, the ability to use SpaceWire links within LabVIEW will be important to electronic ground support equipment vendors. In addition, there is a need for a high-level LabVIEW interface to the low-level device-driver software supplied with the card. The LabVIEW virtual instrument (VI) provides graphical interfaces that support (1) all SpaceWire link functions, including message handling and routing; (2) monitoring as a passive tap using specialized hardware; and (3) low-level access to satellite mission-control subsystem functions. The software is supplied in a zip file that contains LabVIEW VI files, which provide various functions of the PCI-SpaceWire card as well as higher-link-level functions. The VIs are named according to the matching function names in the driver manual. A number of test programs are also provided to exercise various functions.

  10. Air-condition Control System of Weaving Workshop Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Song, Jian

    An air-conditioning measurement and control system based on LabVIEW is proposed to effectively control the environmental targets in a weaving workshop. The system is built on virtual instrument technology using the LabVIEW development platform from NI. It is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules and executive devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interface is designed with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
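
    A minimal sketch of the fuzzy-control idea follows: the measured temperature is fuzzified against linguistic sets and a crisp actuator command is recovered by weighted (centroid-style) defuzzification. The membership breakpoints and rule outputs below are invented for illustration and are not the parameters of the system described above.

```python
# Hedged sketch of a single-input fuzzy controller: fuzzify temperature,
# fire three rules, defuzzify to a fan duty cycle. Numbers are illustrative.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cooling_command(temp_c):
    # Fuzzify the temperature against three linguistic sets.
    cool = tri(temp_c, 10, 18, 24)
    comfortable = tri(temp_c, 20, 25, 30)
    hot = tri(temp_c, 26, 32, 45)
    # Each rule maps a set to a fan duty; blend by membership weight.
    weights = {0.0: cool, 0.4: comfortable, 1.0: hot}
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(duty * w for duty, w in weights.items()) / total

print(cooling_command(28.0))   # blends the "comfortable" and "hot" rules
```

    A production controller would take both temperature and humidity errors as inputs and use a full rule table, but the fuzzify/fire/defuzzify structure is the same.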

  11. Virtual Reality-Enhanced Extinction of Phobias and Post-Traumatic Stress.

    PubMed

    Maples-Keller, Jessica L; Yasinski, Carly; Manjin, Nicole; Rothbaum, Barbara Olasov

    2017-07-01

    Virtual reality (VR) refers to an advanced technological communication interface in which the user is actively participating in a computer-generated 3-dimensional virtual world that includes computer sensory input devices used to simulate real-world interactive experiences. VR has been used within psychiatric treatment for anxiety disorders, particularly specific phobias and post-traumatic stress disorder, given several advantages that VR provides for use within treatment for these disorders. Exposure therapy for anxiety disorder is grounded in fear-conditioning models, in which extinction learning involves the process through which conditioned fear responses decrease or are inhibited. The present review will provide an overview of extinction training and anxiety disorder treatment, advantages for using VR within extinction training, a review of the literature regarding the effectiveness of VR within exposure therapy for specific phobias and post-traumatic stress disorder, and limitations and future directions of the extant empirical literature.

  12. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to object size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are discussed.
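
    For reference, Weber's law states that the just-noticeable difference scales with stimulus magnitude, so resolution is relative rather than absolute:

```latex
% Weber's law: the just-noticeable difference (JND) grows in proportion to
% stimulus intensity I, with Weber fraction k.
\[
\frac{\Delta I}{I} \;=\; k
\qquad\Longleftrightarrow\qquad
\Delta I \;=\; k\,I
\]
% Typical 3-D grasping violates this relation (aperture resolution stays
% roughly constant across object sizes); adherence to it in the virtual
% environment marks the perception-like control reported above.
```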

  13. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Nowadays, patients and doctors in the operating room are surrounded by many medical devices resulting from recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). On the other hand, several surgical robots have been commercialized and are becoming common; however, they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" will only be possible through collaboration among surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic software development for surgical robots is "home-made" within individual research institutions and not open to the public. Open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies based on actuators, sensors and various kinds of internal devices, so it cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique that acts as a "force guide," supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The extension of the virtual fixture is not part of the Open Core Control system itself; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reaction force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration work. It is intended to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices, and it is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. Following these regulations requires a self-test environment; therefore, a test environment is now under development to test various kinds of interference in the operating room, such as noise from an electric knife, with attention to safety and test-environment regulations such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions to it.
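
    The hardware-abstraction idea described above can be sketched as follows: a hardware-independent control core talks only to abstract robot and network interfaces, so device-specific parts can be swapped without touching the rest. The class and method names below are illustrative assumptions, not the actual Open Core Control API.

```python
# Hedged sketch of hardware abstraction in a robot control core. Names are
# hypothetical; the real system uses OpenIGTLink for the network layer.
from abc import ABC, abstractmethod

class RobotHardware(ABC):
    """Hardware-dependent layer: one subclass per actuator/sensor set."""
    @abstractmethod
    def read_joint_angles(self) -> list: ...
    @abstractmethod
    def command_torques(self, torques: list) -> None: ...

class NetworkInterface(ABC):
    """External-device layer, e.g. an OpenIGTLink-style link to a navigator."""
    @abstractmethod
    def send_tool_pose(self, pose: list) -> None: ...
    @abstractmethod
    def receive_accessible_region(self) -> object: ...

class ControlCore:
    """Hardware-independent loop; enforces a virtual-fixture-style boundary."""
    def __init__(self, hw: RobotHardware, net: NetworkInterface):
        self.hw, self.net = hw, net

    def step(self):
        q = self.hw.read_joint_angles()
        region = self.net.receive_accessible_region()
        tau = self.compute_torques(q, region)   # add repulsion near the boundary
        self.hw.command_torques(tau)

    def compute_torques(self, q, region):
        raise NotImplementedError               # controller-specific policy
```

    Porting to a new robot then means subclassing RobotHardware, while the control logic and the network collaboration code remain untouched.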

  14. The use of virtual ground to control transmembrane voltages and measure bilayer currents in serial arrays of droplet interface bilayers

    NASA Astrophysics Data System (ADS)

    Sarles, Stephen A.

    2013-09-01

    The droplet interface bilayer (DIB) is a simple technique for constructing a stable lipid bilayer at the interface of two lipid-encased water droplets submerged in oil. Networks of DIBs formed by connecting more than two droplets constitute a new form of modular biomolecular smart material, where the transduction properties of a single lipid bilayer can affect the actions performed at other interface bilayers in the network via diffusion through the aqueous environments of shared droplet connections. The passive electrical properties of a lipid bilayer and the arrangement of droplets that determine the paths for transport in the network require specific electrical control to stimulate and interrogate each bilayer. Here, we explore the use of virtual ground for electrodes inserted into specific droplets in the network and employ a multichannel patch clamp amplifier to characterize bilayer formation and ion-channel activity in a serial DIB array. Analysis of serial connections of DIBs is discussed to understand how assigning electrode connections to the measurement device can be used to measure activity across all lipid membranes within a network. Serial arrays of DIBs are assembled using the regulated attachment method within a multi-compartment flexible substrate, and wire-type electrodes inserted into each droplet compartment of the substrate enable the application of voltage and measurement of current in each droplet in the array.

  15. Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation.

    PubMed

    Lin, Chi-Ying; Tsai, Chia-Min; Shih, Pei-Cheng; Wu, Hsiao-Ching

    2015-01-01

    Almost all stroke patients experience some degree of fine motor impairment, and impeded finger movement may limit activities of daily living. Thus, to improve the quality of life of stroke patients, designing an efficient training device for fine motor rehabilitation is crucial. This study aimed to develop a novel fine motor training glove that integrates a virtual-reality-based interactive environment with vibrotactile feedback for more effective poststroke hand rehabilitation. The proposed haptic rehabilitation device is equipped with small DC vibration motors for vibrotactile feedback stimulation and piezoresistive thin-film force sensors for motor function evaluation. Two virtual-reality-based games, "gopher hitting" and "musical note hitting," were developed as a haptic interface. Following the designed rehabilitation program, patients intuitively push and exercise their fingers to improve finger isolation function. Preliminary tests were conducted to assess the feasibility of the developed haptic rehabilitation system and to identify design concerns regarding practical use in future clinical testing.

  16. Future Cyborgs: Human-Machine Interface for Virtual Reality Applications

    DTIC Science & Technology

    2007-04-01

    FUTURE CYBORGS : HUMAN-MACHINE INTERFACE FOR VIRTUAL REALITY APPLICATIONS Robert R. Powell, Major, USAF April 2007 Blue Horizons...Nicholas Negroponte, Being Digital (New York: Alfred A Knopf, Inc, 1995), 123. 23 Ibid. 24 Andy Clark, Natural-Born Cyborgs (New York: Oxford

  17. Pseudo 1-D Micro/Nanofluidic Device for Exact Electrokinetic Responses.

    PubMed

    Kim, Junsuk; Kim, Ho-Young; Lee, Hyomin; Kim, Sung Jae

    2016-06-28

    Conventionally, a 1-D micro/nanofluidic device, in which a nanochannel bridges two microchannels, has been widely used in fundamental electrokinetic studies; however, this configuration has the intrinsic limitation of time-consuming and labor-intensive filling and flushing of the microchannels due to the high fluidic resistance of the nanochannel bridge. In this work, a pseudo 1-D micro/nanofluidic device incorporating air valves at each microchannel is proposed to mitigate these limitations. The high Laplace pressure formed at the liquid/air interface inside the microchannels acts as a virtual valve only while the electrokinetic operations are conducted. Identical electrokinetic behaviors, in the propagation of the ion concentration polarization layer and in current-voltage responses, were obtained in comparison with the conventional 1-D micro/nanofluidic device by both experiments and numerical simulations. Therefore, the suggested pseudo 1-D micro/nanofluidic device offers not only experimental convenience but also exact electrokinetic responses.
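
    The "virtual valve" above rests on the Young-Laplace relation: the pressure jump across a curved meniscus grows as the channel shrinks. The approximate form below is the generic relation with assumed symbols, not the paper's specific geometry:

```latex
% Young-Laplace pressure jump across a meniscus with principal radii R1, R2;
% for a meniscus pinned in a channel of effective radius r and contact angle
% theta this is commonly approximated by the right-hand expression.
\[
\Delta P \;=\; \gamma\!\left(\frac{1}{R_1}+\frac{1}{R_2}\right)
\;\approx\; \frac{2\,\gamma\cos\theta}{r}
\]
% A sufficiently small r yields a pressure barrier that passively blocks bulk
% flow through the air gap, while electrokinetic operations still proceed
% through the liquid path: the valving behaviour exploited above.
```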

  18. Simple two-electrode biosignal amplifier.

    PubMed

    Dobrev, D; Neycheva, T; Mudrov, N

    2005-11-01

    A simple, cost-effective circuit for a two-electrode non-differential biopotential amplifier is proposed. It uses a 'virtual ground' transimpedance amplifier and a parallel RC network for input common-mode current equalisation, while the signal input impedance preserves its high value. With this innovative interface circuit, a simple non-inverting amplifier fully emulates a high-CMRR differential amplifier. The amplifier's equivalent CMRR (typically 70-100 dB) is equal to the open-loop gain of the operational amplifier used in the transimpedance interface stage. The circuit has a very simple structure and utilises a small number of popular components. The amplifier is intended for use in various two-electrode applications, such as Holter-type monitors, defibrillators, ECG monitors, biotelemetry devices, etc.
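
    The first-order behaviour of the virtual-ground input stage can be written down directly; the relation below is the textbook transimpedance form, not a full analysis of the published circuit:

```latex
% Ideal transimpedance stage: electrode current I_in flows through feedback
% resistor R_f while the op-amp holds its inverting input at virtual ground.
\[
V_{\mathrm{out}} \;=\; -\,R_f\,I_{\mathrm{in}}
\]
% Because the input node is actively held near 0 V, common-mode rejection is
% limited by the op-amp's open-loop gain, matching the CMRR figure above.
```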

  19. Towards open-source, low-cost haptics for surgery simulation.

    PubMed

    Suwelack, Stefan; Sander, Christian; Schill, Julian; Serf, Manuel; Danz, Marcel; Asfour, Tamim; Burger, Wolfgang; Dillmann, Rüdiger; Speidel, Stefanie

    2014-01-01

    In minimally invasive surgery (MIS), virtual reality (VR) training systems have become a promising education tool. However, the adoption of these systems in research and clinical settings is still limited by the high costs of dedicated haptics hardware for MIS. In this paper, we present ongoing research towards an open-source, low-cost haptic interface for MIS simulation. We demonstrate the basic mechanical design of the device, the sensor setup as well as its software integration.

  20. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.

  1. Multi-modal virtual environment research at Armstrong Laboratory

    NASA Technical Reports Server (NTRS)

    Eggleston, Robert G.

    1995-01-01

    One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.

  2. Interfacial Molecular Packing Determines Exciton Dynamics in Molecular Heterostructures: The Case of Pentacene-Perfluoropentacene.

    PubMed

    Rinn, Andre; Breuer, Tobias; Wiegand, Julia; Beck, Michael; Hübner, Jens; Döring, Robin C; Oestreich, Michael; Heimbrodt, Wolfram; Witte, Gregor; Chatterjee, Sangam

    2017-12-06

    The great majority of electronic and optoelectronic devices depend on interfaces between p-type and n-type semiconductors. Finding matching donor-acceptor systems in molecular semiconductors remains a challenging endeavor because structurally compatible molecules may not necessarily be suitable with respect to their optical and electronic properties, and the large exciton binding energy in these materials may favor bound electron-hole pairs rather than free carriers or charge transfer at an interface. Regardless, interfacial charge-transfer exciton states are commonly considered as an intermediate step to achieve exciton dissociation. The formation efficiency and decay dynamics of such states will strongly depend on the molecular makeup of the interface, especially the relative alignment of donor and acceptor molecules. Structurally well-defined pentacene-perfluoropentacene heterostructures of different molecular orientations are virtually ideal model systems to study the interrelation between molecular packing motifs at the interface and their electronic properties. Comparing the emission dynamics of the heterosystems and the corresponding unitary films enables accurate assignment of every observable emission signal in the heterosystems. These heterosystems feature two characteristic interface-specific luminescence channels at around 1.4 and 1.5 eV that are not observed in the unitary samples. Their emission strength strongly depends on the molecular alignment of the respective donor and acceptor molecules, emphasizing the importance of structural control for device construction.

  3. Virtual Reality Simulation of the International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.

  4. Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q2 [MS-17/01] Better Dynamic Types Design SDA05-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    The FY17Q2 milestone of the ECP/VTK-m project, which is the first milestone, includes the completion of design documents for the introduction of virtual methods into the VTK-m framework: specifically, the ability from within the code of a device (e.g. GPU or Xeon Phi) to jump to a virtual method specified at run time. This change will enable us to drastically reduce the compile time and the executable code size for the VTK-m library. Our first design introduced the idea of adding virtual functions to classes that are used during algorithm execution. (Virtual methods were previously banned from the so-called execution environment.) The design was straightforward. VTK-m already has the generic concepts of an "array handle" that provides a uniform interface to memory of different structures and an "array portal" that provides generic access to said memory. These array handles and portals use C++ templating to adjust them to different memory structures. This composition provides a powerful ability to adapt to data sources, but requires knowing static types. The proposed design creates a template specialization of an array portal that decorates another array handle while hiding its type. In this way we can wrap any type of static array handle and then feed it to a single compiled instance of a function. The second design focused on the mechanics of implementing virtual methods on parallel devices, with a focus on CUDA. Our initial experiments on CUDA showed a very large overhead for using virtual C++ classes with virtual methods, the standard approach. Instead, we are using an alternate method, provided by C, that uses function pointers. With the completion of this milestone, we are able to move to the implementation of objects with virtual-like methods. The upshot will be much faster compile times and much smaller library/executable sizes.

  5. Stability effects of singularities in force-controlled robotic assist devices

    NASA Astrophysics Data System (ADS)

    Luecke, Greg R.

    2002-02-01

    Force feedback is being used as an interface between humans and material handling equipment to provide an intuitive method to control large and bulky payloads. Powered actuation in the lift assist device compensates for the inertial characteristics of the manipulator and the payload to provide effortless control and handling of manufacturing parts, components, and assemblies. The use of these Intelligent Assist Devices (IADs) is being explored to prevent worker injury, enhance material handling performance, and increase productivity in the workplace. The IAD also provides the capability to shape and control motion in the workspace during routine operations. Virtual barriers can be developed to protect fixed objects in the workspace, and regions can be programmed that attract the work piece to a certain position and orientation. However, the robot is still under complete control of the human operator, with the trajectory being determined and commanded using the judgment of the operator to complete a given task. In many cases, the IAD is built in a configuration that may have singular points inside the workspace. These singularities can cause problems when unstructured trajectory commands from the human drive the IAD into interaction with the virtual walls and fixtures at positions close to these singularities. The research presented here explores the stability effects of the interactions between the powered manipulator and the virtual surfaces when controlled by the operator. Because the real-time work piece paths depend on flexible human decisions, manipulator singularities that occur in conjunction with the virtual surfaces raise stability issues in the performance around these singularities. We examine these stability issues in the context of a particular IAD configuration, and present analytic results for the performance and stability of these systems in response to the real-time trajectory modification of the human operator.
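
    For background on this kind of stability question: a virtual wall or fixture is commonly rendered as a stiff spring-damper acting on the penetration depth, and a well-known discrete-time passivity condition from the haptics literature (due to Colgate and colleagues) bounds how stiff the wall can be made. The relations below are that standard textbook form, not the specific analysis of this paper:

```latex
% Virtual wall rendered on penetration depth x > 0, with virtual
% stiffness K and virtual damping B; the wall remains passive when
% the device's physical damping b satisfies Colgate's bound for
% sample period T.
F =
\begin{cases}
 -K x - B \dot{x}, & x > 0,\\
 0, & x \le 0,
\end{cases}
\qquad
b > \frac{K T}{2} + \lvert B \rvert .
```

    Near a kinematic singularity the effective inertia and damping felt at the end effector change sharply, which is one way operator-driven contact with a virtual surface can violate a fixed-gain condition of this kind.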

  6. Using virtual environment technology for preadapting astronauts to the novel sensory conditions of microgravity

    NASA Technical Reports Server (NTRS)

    Duncan, K. M.; Harm, D. L.; Crosier, W. G.; Worthington, J. W.

    1993-01-01

    A unique training device is being developed at the Johnson Space Center Neurosciences Laboratory to help reduce or eliminate Space Motion Sickness (SMS) and spatial orientation disturbances that occur during spaceflight. The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) uses virtual reality technology to simulate some sensory rearrangements experienced by astronauts in microgravity. By exposing a crew member to this novel environment preflight, it is expected that he/she will become partially adapted, and thereby suffer fewer symptoms inflight. The DOME PAT is a 3.7 m spherical dome, within which a 170 by 100 deg field-of-view computer-generated visual database is projected. The visual database currently in use depicts the interior of a Shuttle spacelab. The trainee uses a six degree-of-freedom, isometric force hand controller to navigate through the virtual environment. Alternatively, the trainee can be 'moved' about within the virtual environment by the instructor, or can look about within the environment by wearing a restraint that controls scene motion in response to head movements. The computer system comprises four personal computers that provide the real-time control and user interface, and two Silicon Graphics computers that generate the graphical images. The image generator computers use custom algorithms to compensate for spherical image distortion, while maintaining a video update rate of 30 Hz. The DOME PAT is the first such system known to employ virtual reality technology to reduce the untoward effects of the sensory rearrangement associated with exposure to microgravity, and it does so in a very cost-effective manner.

  7. Remote laboratories for optical metrology: from the lab to the cloud

    NASA Astrophysics Data System (ADS)

    Osten, W.; Wilke, M.; Pedrini, G.

    2012-10-01

    The idea of remote and virtual metrology was reported as early as 2000, with a conceptual illustration using comparative digital holography aimed at the comparison of two nominally identical but physically different objects, e.g., master and sample, in industrial inspection processes. However, the concept of remote and virtual metrology can be extended far beyond this. For example, it not only allows for the transmission of static holograms over the Internet, but also provides an opportunity to communicate with, and ultimately control, the physical set-up of a remote metrology system. Furthermore, the metrology system can be modeled in a 3D virtual reality environment using CAD or similar technology, providing a more intuitive interface to the physical setup within the virtual world. An engineer or scientist who would like to access the remote real-world system can log on to the virtual system, move and manipulate the setup through an avatar, and take the desired measurements. The real metrology system responds to the interaction between the avatar and the 3D virtual representation. The measurement data are stored and interpreted automatically for appropriate display within the virtual world, providing the necessary feedback to the experimenter. Such a system opens up many novel opportunities in industrial inspection, such as remote master-sample comparison and the virtual assembly of parts that are fabricated at different places. Moreover, a multitude of new techniques can be envisaged. These include modern ways of documenting, efficient methods for metadata storage, the possibility of remote reviewing of experimental results, the enrichment of publications with real experiments by providing remote access to the metadata and the experimental setup via the Internet, the presentation of complex experiments in classrooms and lecture halls, the sharing of expensive and complex infrastructure within international collaborations, the implementation of new ways to remotely test, maintain, and service new devices, and many more. The paper describes the idea of remote laboratories and illustrates the potential of the approach with selected examples, with special attention to optical metrology.

  8. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    This work explores techniques for developing intuitive and efficient user interfaces for virtual reality systems. It seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied well-understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  9. Virtual microscopy and digital pathology in training and education.

    PubMed

    Hamilton, Peter W; Wang, Yinhai; McCullough, Stephen J

    2012-04-01

    Traditionally, education and training in pathology have been delivered using textbooks, glass slides and conventional microscopy. Over the last two decades, the number of web-based pathology resources has expanded dramatically, with centralized pathological resources being delivered to many students simultaneously. Recently, whole slide imaging technology has allowed glass slides to be scanned and viewed on a computer screen via dedicated software. This technology is referred to as virtual microscopy and has created enormous opportunities in pathological training and education. Students are able to learn key histopathological skills, e.g. to identify areas of diagnostic relevance from an entire slide, via a web-based computer environment. Students no longer need to be in the same room as the slides. New human-computer interfaces are also being developed using more natural touch technology to enhance the manipulation of digitized slides. Several major initiatives are also underway introducing online competency and diagnostic decision analysis using virtual microscopy, and these have important future roles in accreditation and recertification. Finally, researchers are investigating how pathological decision-making is achieved using virtual microscopy and modern eye-tracking devices. Virtual microscopy and digital pathology will continue to improve how pathology training and education are delivered. © 2012 The Authors APMIS © 2012 APMIS.

  10. Challenges to the development of complex virtual reality surgical simulations.

    PubMed

    Seymour, N E; Røtnes, J S

    2006-11-01

    Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.

  11. Virtual reality as a new trend in mechanical and electrical engineering education

    NASA Astrophysics Data System (ADS)

    Kamińska, Dorota; Sapiński, Tomasz; Aitken, Nicola; Rocca, Andreas Della; Barańska, Maja; Wietsma, Remco

    2017-12-01

    In their daily practice, academics frequently face a lack of access to the modern equipment and devices currently in use on the market. Moreover, many students have problems understanding issues connected to mechanical and electrical engineering due to their complexity, the necessity of abstract thinking, and the fact that those concepts are not fully tangible. Many studies indicate that virtual reality can be successfully used as a training tool in various domains, such as development, health-care, the military or school education. In this paper, an interactive training strategy for mechanical and electrical engineering education is proposed. The software prototype has a simple interface, making it easy to understand and use. Additionally, the main part of the prototype allows the user to virtually manipulate a 3D object that should be analyzed and studied. Initial studies indicate that the use of virtual reality can contribute to improving the quality and efficiency of higher education, as well as the qualifications, competencies and skills of graduates, and increase their competitiveness in the labour market.

  12. Kinematic/Dynamic Characteristics for Visual and Kinesthetic Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bortolussi, Michael R. (Compiler); Adelstein, B. D.; Gold, Miriam

    1996-01-01

    Work was carried out on two topics of principal importance to current progress in virtual environment research at NASA Ames and elsewhere. The first topic was directed at maximizing the temporal dynamic response of visually presented Virtual Environments (VEs) through reorganization and optimization of system hardware and software. The final result of this portion of the work was a VE system in the Advanced Display and Spatial Perception Laboratory at NASA Ames capable of updating at 60 Hz (the maximum hardware refresh rate) with latencies approaching 30 msec. In the course of achieving this system performance, specialized hardware and software tools for measurement of VE latency and analytic models correlating update rate and latency for different system configurations were developed. The second area of activity was the preliminary development and analysis of a novel kinematic architecture for three degree-of-freedom (DOF) haptic interfaces, i.e., devices that provide force feedback for manipulative interaction with virtual and remote environments. An invention disclosure was filed on this work and a patent application is being pursued by NASA Ames. Activities in these two areas are expanded upon below.
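
    As a rough illustration of the kind of analytic model mentioned (a generic textbook decomposition, not necessarily the exact model developed in this work), end-to-end VE latency is often written as a sum of pipeline stage times plus an average sampling delay tied to the display update rate f:

```latex
% Generic decomposition of end-to-end virtual environment latency:
% tracker time + application time + rendering time, plus an average
% half-frame sampling delay at display update rate f.
L \;\approx\; \tau_{\text{tracker}} + \tau_{\text{app}} + \tau_{\text{render}} + \frac{1}{2f}
```

    At the 60 Hz update rate reported above, the half-frame term alone contributes about 8 msec, consistent with total latencies approaching 30 msec.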

  13. Control devices and steering strategies in pathway surgery.

    PubMed

    Fan, Chunman; Jelínek, Filip; Dodou, Dimitra; Breedveld, Paul

    2015-02-01

    For pathway surgery, that is, minimally invasive procedures carried out transluminally or through instrument-created pathways, handheld maneuverable instruments are being developed. As the accompanying control interfaces of such instruments have not been optimized for intuitive manipulation, we investigated the effect of control mode (1DoF or 2DoF) and control device (joystick or handgrip) on human performance in a navigation task. The experiments were conducted using the Endo-PaC (Endoscopic-Path Controller), a simulator that emulates the shaft and handle of a maneuverable instrument, combined with custom-developed software animating pathway surgical scenarios. Participants were asked to guide a virtual instrument without collisions toward a target located at the end of a virtual curved tunnel. Performance was assessed in terms of task completion time, path length traveled by the virtual instrument, motion smoothness, collision metrics, subjective workload, and personal preference. The results indicate that 2DoF control leads to faster task completion and fewer collisions with the tunnel wall, combined with a strong subjective preference, compared with 1DoF control. Handgrip control appeared to be more intuitive to master than joystick control. However, the participants experienced greater physical demand and had longer path lengths with handgrip than with joystick control. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera, and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of any means to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  15. Adding tactile realism to a virtual reality laparoscopic surgical simulator with a cost-effective human interface device

    NASA Astrophysics Data System (ADS)

    Mack, Ian W.; Potts, Stephen; McMenemy, Karen R.; Ferguson, R. S.

    2006-02-01

    The laparoscopic technique for performing abdominal surgery requires a very high degree of skill in the medical practitioner. Much interest has been focused on using computer graphics to provide simulators for training surgeons. Unfortunately, these tend to be complex and have a very high cost, which limits availability and restricts the length of time over which individuals can practice their skills. With computer game technology able to provide the graphics required for a surgical simulator, the cost does not have to be high. However, graphics alone cannot serve as a training simulator. Human interface hardware, the equivalent of the force feedback joystick for a flight simulator game, is required to complete the system. This paper presents a design for a very low cost device to address this vital issue. The design encompasses: the mechanical construction, the electronic interfaces and the software protocols to mimic a laparoscopic surgical set-up. Thus the surgeon has the capability of practicing two-handed procedures with the possibility of force feedback. The force feedback and collision detection algorithms allow surgeons to practice realistic operating theatre procedures with a good degree of authenticity.

  16. The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    1993-01-01

    This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.

  17. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them is adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
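
    The device-independence idea can be sketched as an abstract interface plus one driver class per physical frame buffer. The original system exposed FORTRAN subroutines; the C++ sketch below is a hypothetical rendering of the same concept, with invented names and a trivial in-memory "device" standing in for real hardware:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Generic frame buffer interface: application code sees only these
// operations, never vendor-specific architecture details.
class VirtualFrameBuffer {
 public:
  virtual ~VirtualFrameBuffer() = default;
  virtual int Width() const = 0;
  virtual int Height() const = 0;
  virtual void WritePixel(int x, int y, std::uint32_t value) = 0;
  virtual std::uint32_t ReadPixel(int x, int y) const = 0;
};

// A trivial in-memory implementation; a real driver would translate
// these calls into the vendor's hardware operations instead.
class MemoryFrameBuffer : public VirtualFrameBuffer {
 public:
  MemoryFrameBuffer(int w, int h) : w_(w), h_(h), pixels_(w * h, 0) {}
  int Width() const override { return w_; }
  int Height() const override { return h_; }
  void WritePixel(int x, int y, std::uint32_t v) override {
    pixels_[y * w_ + x] = v;
  }
  std::uint32_t ReadPixel(int x, int y) const override {
    return pixels_[y * w_ + x];
  }
 private:
  int w_, h_;
  std::vector<std::uint32_t> pixels_;
};

int main() {
  MemoryFrameBuffer fb(512, 512);           // adding a new device means
  fb.WritePixel(10, 20, 0x00FF00u);         // writing one driver class
  std::printf("pixel = 0x%06X\n", fb.ReadPixel(10, 20));
}
```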

  18. Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

    PubMed Central

    Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W

    2015-01-01

    Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
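
    The UDP link between the neuronal model and the arm can be illustrated with a short sketch. The packet layout, port, and address below are hypothetical, invented for illustration; the actual protocol used with the Barrett WAM is not described in the abstract.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical joint-state datagram for a two-joint arm. A real
// protocol would also fix byte order and add a sequence number.
struct JointState {
  float shoulderRad;
  float elbowRad;
};

int main() {
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  if (sock < 0) return 1;

  sockaddr_in dest{};
  dest.sin_family = AF_INET;
  dest.sin_port = htons(9000);                         // assumed port
  inet_pton(AF_INET, "192.168.0.42", &dest.sin_addr);  // assumed robot IP

  // Send one target posture computed by the model each simulation tick;
  // UDP keeps per-message latency low at the cost of delivery guarantees.
  // The arm would report its joint positions back on a similar socket.
  JointState cmd{0.50f, 1.25f};
  sendto(sock, &cmd, sizeof(cmd), 0,
         reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

  close(sock);
  return 0;
}
```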

  19. A microbased shared virtual world prototype

    NASA Technical Reports Server (NTRS)

    Pitts, Gerald; Robinson, Mark; Strange, Steve

    1993-01-01

    Virtual reality (VR) allows sensory immersion and interaction with a computer-generated environment. The user adopts a physical interface with the computer, through input/output devices such as a head-mounted display, data glove, mouse, keyboard, or monitor, to experience an alternate universe. What this means is that the computer generates an environment which, in its ultimate extension, becomes indistinguishable from the real world. 'Imagine a wraparound television with three-dimensional programs, including three-dimensional sound, and solid objects that you can pick up and manipulate, even feel with your fingers and hands.... Imagine that you are the creator as well as the consumer of your artificial experience, with the power to use a gesture or word to remold the world you see and hear and feel. That part is not fiction... the three-dimensional computer graphics, input/output devices, and computer models that constitute a VR system make it possible, today, to immerse yourself in an artificial world and to reach in and reshape it.' The goal of our research was to propose a feasibility experiment in the construction of a networked virtual reality system, making use of current personal computer (PC) technology. The prototype was built using the Borland C compiler, running on an IBM 486 33 MHz and a 386 33 MHz. Each game instance currently runs as an IPX client on a non-dedicated Novell server. We initially posed two questions: (1) Is there a need for networked virtual reality? (2) In what ways can the technology be made available to as many people as possible?

  20. Local and Remote Cooperation With Virtual and Robotic Agents: A P300 BCI Study in Healthy and People Living With Spinal Cord Injury.

    PubMed

    Tidoni, Emmanuele; Abu-Alqumsan, Mohammad; Leonardis, Daniele; Kapeller, Christoph; Fusco, Gabriele; Guger, Cristoph; Hintermuller, Cristoph; Peer, Angelika; Frisoli, Antonio; Tecchia, Franco; Bergamasco, Massimo; Aglioti, Salvatore Maria

    2017-09-01

    The development of technological applications that allow people to control and embody external devices within social interaction settings represents a major goal for current and future brain-computer interface (BCI) systems. Prior research has suggested that embodied systems may improve the BCI end-user's experience and accuracy in controlling external devices. Along these lines, we developed an immersive P300-based BCI application with a head-mounted display for virtual-local and robotic-remote social interactions, and explored, in a group of healthy participants, the role of proprioceptive feedback in the control of a virtual surrogate (Study 1). Moreover, we compared the performance of a small group of people with spinal cord injury (SCI) to a control group of healthy subjects during virtual and robotic social interactions (Study 2), where both groups received proprioceptive stimulation. Our attempt to combine immersive environments, BCI technologies and the neuroscience of body ownership suggests that providing realistic multisensory feedback still represents a challenge. Results have shown that healthy participants and people living with SCI used the BCI within the immersive scenarios with good levels of performance (as indexed by task accuracy, optimization calls and information transfer rate) and perceived control of the surrogates. Proprioceptive feedback did not alter performance measures or body ownership sensations. Further studies are necessary to test whether sensorimotor experience represents an opportunity to improve the use of future embodied BCI applications.

  1. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information, through an interface, to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example, an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media, such as posters and handouts, lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and fieldwork. In the latter case, location-based data is loaded so that, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation, classroom learning and field geoscience learning.

  2. Anthropomorphic teleoperation: Controlling remote manipulators with the DataGlove

    NASA Technical Reports Server (NTRS)

    Hale, J. P., II

    1992-01-01

    A two-phase effort was conducted to assess the capabilities and limitations of the DataGlove, a lightweight glove input device that can output signals in real time based on hand shape, orientation, and movement. The first phase was a period for system integration, checkout, and familiarization in a virtual environment. The second phase was a formal experiment using the DataGlove as an input device to control the protoflight manipulator arm (PFMA), a large telerobotic arm with an 8-ft reach. The first phase was used to explore and understand how the DataGlove functions in a virtual environment, build a virtual PFMA, and consider and select a reasonable teleoperation control methodology. Twelve volunteers (six males and six females) participated in a 2 x 3 (x 2) full-factorial formal experiment using the DataGlove to control the PFMA in a simple retraction, slewing, and insertion task. Two within-subjects variables, time delay (0, 1, and 2 seconds) and PFMA wrist flexibility (rigid/flexible), were manipulated. Gender served as a blocking variable. A main effect of time delay was found for slewing and total task times. Correlations among questionnaire responses, and between questionnaire responses and session mean scores and gender, were computed. The experimental data were also compared with data collected in another study that used a six degree-of-freedom handcontroller to control the PFMA in the same task. It was concluded that the DataGlove is a legitimate teleoperations input device that provides a natural, intuitive user interface. From an operational point of view, it compares favorably with other 'standard' telerobotic input devices and should be considered in future trades in teleoperation systems' designs.

  3. In situ pre-growth calibration using reflectance as a control strategy for MOCVD fabrication of device structures

    NASA Astrophysics Data System (ADS)

    Breiland, William G.; Hou, Hong Q.; Chui, Herman C.; Hammons, Burrel E.

    1997-04-01

    In situ normal incidence reflectance, combined with a virtual interface model, is being used routinely on a commercial metal organic chemical vapor deposition reactor to measure growth rates of compound semiconductor films. The technique serves as a pre-growth calibration tool analogous to the use of reflection high-energy electron diffraction in molecular beam epitaxy as well as a real-time monitor throughout the run. An application of the method to the growth of a vertical cavity surface emitting laser (VCSEL) device structure is presented. All necessary calibration information can be obtained using a single run lasting less than 1 h. Working VCSEL devices are obtained on the first try after calibration. Repeated runs have yielded ±0.3% reproducibility of the Fabry-Perot cavity wavelength over the course of more than 100 runs.
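
    The growth-rate extraction behind this kind of reflectance monitoring rests on a standard thin-film interference relation (the virtual interface model adds corrections for underlying layers and absorption, which are not reproduced here): one full oscillation of the normal-incidence reflectance corresponds to a grown thickness of half a wavelength in the film,

```latex
% One reflectance oscillation corresponds to lambda/(2n) of grown
% film, so an observed oscillation period tau gives the growth rate
r \;=\; \frac{\lambda}{2\, n\, \tau},
```

    where λ is the probe wavelength and n the refractive index of the growing film at that wavelength.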

  4. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  5. Interfacial characterization of flexible hybrid electronics

    NASA Astrophysics Data System (ADS)

    Najafian, Sara; Amirkhizi, Alireza V.; Stapleton, Scott

    2018-03-01

    Flexible Hybrid Electronics (FHEs) are the new generation of electronics combining flexible plastic film substrates with electronic devices. Beyond the electrical features, design improvements of FHEs depend on the prediction of their mechanical and failure behavior. Debonding of electronic components from the flexible substrate is one of the most common and critical failures of these devices; therefore, the experimental determination of material and interface properties is of great importance in the prediction of failure mechanisms. Traditional interface characterization involves isolated shear- and normal-mode tests such as the double cantilever beam (DCB) and end notch flexure (ENF) tests. However, due to the thin, flexible nature of the materials and manufacturing restrictions, tests mirroring traditional interface characterization experiments may not always be possible. The ideal goal of this research is to design experiments such that each mode of fracture is isolated. However, due to the complex nonlinear nature of the response and the small geometries of FHEs, designing proper tests to characterize the interface properties can be significantly time- and cost-consuming. Hence, numerical modeling has been implemented to design these novel characterization experiments. This research involves parametric studies of loading cases and specimen geometries using numerical modeling to design future experiments in which either shear or normal fracture modes are dominant. These virtual experiments will provide a foundation for designing similar tests for many different types of flexible electronics and predicting the failure mechanism independent of the specific FHE materials.

  6. Open core control software for surgical robots

    PubMed Central

    Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-01-01

    Object: Patients and doctors in the operating room are now surrounded by many medical devices, a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" will only be possible through collaboration among surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic software developments for surgical robots are "home-made" within individual research institutions and not open to the public. Open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges.
    Materials and methods: In general, control software has hardware dependencies arising from actuators, sensors and various kinds of internal devices, and therefore cannot be reused on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which plays the role of communicating with external medical devices. At the same time, it is essential to maintain stable operation despite asynchronous data transactions over the network, and several techniques were introduced in the Open Core Control software for this purpose. The virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The extension of the virtual fixture is not part of the Open Core Control system itself, but such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force whenever the operator tries to move out of the pre-defined accessible area during surgery.
    Results: The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance.
    Conclusion: In this paper, the design of the Open Core Control software for surgical robots and the implementation of a virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaborative tasks. It is being developed to become a widely used platform for surgical robots. Safety is essential for the control software of such complex medical devices, and it is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. Following these regulations requires a self-test environment; a test environment is therefore under development to test various kinds of interference in the operating room, such as the noise of an electric knife, with reference to safety standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions. PMID:20033506
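
    The "force guide" behavior described above (push back whenever the tool tip leaves the pre-defined accessible area) reduces to a simple spring law on the boundary violation. The sketch below is a minimal illustration with a spherical region and hypothetical names and gains, not the paper's implementation:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Virtual-fixture reflection force: zero inside the accessible
// region (a sphere here), and a spring force proportional to the
// penetration beyond the boundary, directed back toward the center.
Vec3 FixtureForce(const Vec3& tip, const Vec3& center,
                  double radius, double stiffness) {
  Vec3 d{tip[0] - center[0], tip[1] - center[1], tip[2] - center[2]};
  double dist = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
  if (dist <= radius) return {0.0, 0.0, 0.0};  // inside: no force

  double scale = -stiffness * (dist - radius) / dist;
  return {scale * d[0], scale * d[1], scale * d[2]};
}

int main() {
  // Tip 2 cm outside a 10 cm accessible sphere, 500 N/m assumed gain.
  Vec3 f = FixtureForce({0.0, 0.0, 0.12}, {0.0, 0.0, 0.0}, 0.10, 500.0);
  std::printf("reflection force: (%f, %f, %f) N\n", f[0], f[1], f[2]);
}
```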

  7. Army Training: Efforts to Adjust Training Requirements Should Consider the Use of Virtual Training Devices

    DTIC Science & Technology

    2016-08-01

    What GAO Found: In 2010, the Army began modifying its training priorities and goals to ... until fiscal year 2017. The Army has taken some steps to improve the integration of virtual training devices into operational training, but gaps in ...

  8. Virtual interface environment workstations

    NASA Technical Reports Server (NTRS)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  9. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  10. Asynchronous P300-based brain-computer interface to control a virtual environment: initial tests on end users.

    PubMed

    Aloise, Fabio; Schettini, Francesca; Aricò, Pietro; Salinari, Serenella; Guger, Christoph; Rinsma, Johanna; Aiello, Marco; Mattia, Donatella; Cincotti, Febo

    2011-10-01

    Motor disability and/or ageing can prevent individuals from fully enjoying home facilities, thus worsening their quality of life. Advances in the field of accessible user interfaces for domotic appliances can represent a valuable way to improve the independence of these persons. An asynchronous P300-based Brain-Computer Interface (BCI) system was recently validated with the participation of healthy young volunteers for environmental control. In this study, the asynchronous P300-based BCI for the interaction with a virtual home environment was tested with the participation of potential end-users (clients of a Frisian home care organization) with limited autonomy due to ageing and/or motor disabilities. System testing revealed that the minimum number of stimulation sequences needed to achieve correct classification had a higher intra-subject variability in potential end-users with respect to what was previously observed in young controls. Here we show that the asynchronous modality performed significantly better as compared to the synchronous mode in continuously adapting its speed to the users' state. Furthermore, the asynchronous system modality confirmed its reliability in avoiding misclassifications and false positives, as previously shown in young healthy subjects. The asynchronous modality may contribute to filling the usability gap between BCI systems and traditional input devices, representing an important step towards their use in the activities of daily living.

  11. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  12. Applying mixed reality to simulate vulnerable populations for practicing clinical communication skills.

    PubMed

    Chuah, Joon Hao; Lok, Benjamin; Black, Erik

    2013-04-01

    Health sciences students often practice and are evaluated on interview and exam skills by working with standardized patients (people that role play having a disease or condition). However, standardized patients do not exist for certain vulnerable populations such as children and the intellectually disabled. As a result, students receive little to no exposure to vulnerable populations before becoming working professionals. To address this problem and thereby increase exposure to vulnerable populations, we propose using virtual humans to simulate members of vulnerable populations. We created a mixed reality pediatric patient that allowed students to practice pediatric developmental exams. Practicing several exams is necessary for students to understand how to properly interact with and correctly assess a variety of children. Practice also increases a student's confidence in performing the exam. Effective practice requires students to treat the virtual child realistically. Treating the child realistically might be affected by how the student and virtual child physically interact, so we created two object interaction interfaces - a natural interface and a mouse-based interface. We tested the complete mixed reality exam and also compared the two object interaction interfaces in a within-subjects user study with 22 participants. Our results showed that the participants accepted the virtual child as a child and treated it realistically. Participants also preferred the natural interface, but the interface did not affect how realistically participants treated the virtual child.

  13. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facilities, are limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi-language virtual keyboard based on a menu wherein 10 commands provide access for typing 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts were presented and evaluated with ten healthy participants. Furthermore, the two best-performing keyboard layouts were evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA-TLX score of 17) for the letter-frequency and time-based organization with script-specific arrangement design. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.

  14. A Virtual Object-Location Task for Children: Gender and Videogame Experience Influence Navigation; Age Impacts Memory and Completion Time.

    PubMed

    Rodriguez-Andres, David; Mendez-Lopez, Magdalena; Juan, M-Carmen; Perez-Hernandez, Elena

    2018-01-01

    The use of virtual reality-based tasks for studying memory has increased considerably. Most of the studies that have looked at child population factors that influence performance on such tasks have been focused on cognitive variables. However, little attention has been paid to the impact of non-cognitive skills. In the present paper, we tested 52 typically-developing children aged 5-12 years in a virtual object-location task. The task assessed their spatial short-term memory for the location of three objects in a virtual city. The virtual task environment was presented using a 3D application consisting of a 120″ stereoscopic screen and a gamepad interface. Measures of learning and displacement indicators in the virtual environment, 3D perception, satisfaction, and usability were obtained. We assessed the children's videogame experience, their visuospatial span, their ability to build blocks, and emotional and behavioral outcomes. The results indicate that learning improved with age. Significant effects on the speed of navigation were found favoring boys and those more experienced with videogames. Visuospatial skills correlated mainly with ability to recall object positions, but the correlation was weak. Longer paths were related with higher scores of withdrawal behavior, attention problems, and a lower visuospatial span. Aggressiveness and experience with the device used for interaction were related with faster navigation. However, the correlations indicated only weak associations among these variables.

  15. Prospective study of device-related complications in intensive care unit detected by virtual autopsy.

    PubMed

    Wichmann, D; Heinemann, A; Zähler, S; Vogel, H; Höpker, W; Püschel, K; Kluge, S

    2018-06-01

    There has been increasing use of invasive techniques, such as extracorporeal organ support, in intensive care units (ICU), and declining autopsy rates. Thus, new measures are needed to maintain high-quality standards. We investigated the potential of computed tomography (CT)-based virtual autopsy to substitute for medical autopsy in this setting. We investigated the potential of virtual autopsy by post-mortem CT to identify complications associated with medical devices in a prospective study of patients who had died in the ICU. Clinical records were reviewed to determine the number and types of medical devices used, and findings from medical and virtual autopsies, related and unrelated to the medical devices, were compared. Medical and virtual autopsies could be performed in 61 patients (Group M/V), and virtual autopsy only in 101 patients (Group V). In Group M/V, 41 device-related complications and 30 device malpositions were identified, but only with a low inter-method agreement. Major findings unrelated to a device were identified in about 25% of patients with a high level of agreement between methods. In Group V, 8 device complications and 36 device malpositions were identified. Device-related complications are frequent in ICU patients. Virtual and medical autopsies showed clear differences in the detection of complications and device malpositions. Both methods should supplement each other rather than one alone for quality control of medical devices in the ICU. Further studies should focus on the identification of special patient populations in which virtual autopsy might be of particular benefit. NCT01541982. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  16. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse, inferring the user's intended direction by the method of frame differencing. We divide each Webcam frame into nine grid cells and make use of background registration to detect the moving object. In order to…
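
    Frame differencing of the kind described reduces to a per-pixel absolute difference against a registered background, followed by finding which of the nine grid cells contains the most motion. The OpenCV-based sketch below is a generic illustration of that idea, not the authors' code; the threshold and cell-selection rule are assumptions:

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

// Webcam frame differencing over a 3x3 grid: the cell with the most
// changed pixels is taken as the user's intended direction
// (center cell, index 4, meaning "no movement"). Illustrative only.
int main() {
  cv::VideoCapture cam(0);
  cv::Mat background, frame, gray, diff, mask;

  cam >> frame;
  cv::cvtColor(frame, background, cv::COLOR_BGR2GRAY);  // register background

  while (cam.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::absdiff(gray, background, diff);                // frame difference
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);

    // Count moving pixels in each of the nine grid cells.
    int cellW = mask.cols / 3, cellH = mask.rows / 3;
    int best = 4;       // default: center cell, no direction
    int bestCount = 0;
    for (int r = 0; r < 3; ++r)
      for (int c = 0; c < 3; ++c) {
        cv::Rect cell(c * cellW, r * cellH, cellW, cellH);
        int count = cv::countNonZero(mask(cell));
        if (count > bestCount) { bestCount = count; best = r * 3 + c; }
      }
    std::printf("active cell: %d\n", best);
    if (cv::waitKey(30) == 27) break;  // Esc to quit
  }
}
```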

  17. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    NASA Astrophysics Data System (ADS)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.

  18. The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders.

    PubMed

    Maples-Keller, Jessica L; Bunnell, Brian E; Kim, Sae-Jin; Rothbaum, Barbara O

    After participating in this activity, learners should be better able to: (1) evaluate the literature regarding the effectiveness of incorporating virtual reality (VR) in the treatment of psychiatric disorders; and (2) assess the use of exposure-based interventions for anxiety disorders. ABSTRACT: Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits of using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

  19. Opportunities of hydrostatically coupled dielectric elastomer actuators for haptic interfaces

    NASA Astrophysics Data System (ADS)

    Carpi, Federico; Frediani, Gabriele; De Rossi, Danilo

    2011-04-01

    As a means to improve the versatility and safety of dielectric elastomer actuators (DEAs) for several fields of application, so-called 'hydrostatically coupled' DEAs (HC-DEAs) have recently been described. HC-DEAs are based on an incompressible fluid that mechanically couples a DE-based active part to a passive part interfaced to the load, so as to enable hydrostatic transmission. This paper presents ongoing developments of HC-DEAs and potential applications in the field of haptics. Three specific examples are considered. The first deals with a wearable tactile display used to provide users with tactile feedback during electronic navigation in virtual environments. The display consists of HC-DEAs arranged in contact with the finger tips. As a second example, an up-scaled prototype version of an 8-dot refreshable cell for dynamic Braille displays is shown. Each Braille dot consists of a miniature HC-DEA with a diameter lower than 2 mm. The third example refers to a device for finger rehabilitation, conceived to work as a sort of active version of a rehabilitation squeezing ball. The device is designed to dynamically change its compliance according to an electric control. The three examples of applications intend to show the potential of the new technology and the prospective opportunities for haptic interfaces.

  20. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 14. Virtual Terminal Unit Test Plan

    DTIC Science & Technology

    1990-09-30

    Jones, L.; Glandorf, F. (Dynamics Research Corporation). Final report. ...specific software modules written for each type of real terminal supported. Virtual Terminal Interface: the callable interface to the Virtual Terminal... (The remainder of the record is escape-sequence test data for the DVF View Fill Area and DVM View Markers commands from the unit test plan.)

  1. Owgis 2.0: Open Source Java Application that Builds Web GIS Interfaces for Desktop and Mobile Devices

    NASA Astrophysics Data System (ADS)

    Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.

    2016-12-01

    OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using COMPASS, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. This application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; the ability to download data; and display of text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.
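    Requesting map data through OGC standards, as OWGIS does, reduces to issuing WMS GetMap queries. The sketch below builds one with the Python standard library; the server URL, layer name, and TIME value are placeholders, not taken from the paper.

```python
import urllib.parse
import urllib.request

# Minimal OGC WMS 1.3.0 GetMap request of the kind a Web GIS client issues;
# the server URL and layer name are hypothetical.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean:sea_surface_temperature",  # hypothetical ncWMS layer
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",    # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2016-12-01T00:00:00Z",  # the 4th dimension for NetCDF-backed layers
}
url = "https://example.org/geoserver/wms?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:   # returns a rendered map image
    png = resp.read()
```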

  2. Study on high breakdown voltage GaN-based vertical field effect transistor with interfacial charge engineering for power applications

    NASA Astrophysics Data System (ADS)

    Du, Jiangfeng; Liu, Dong; Liu, Yong; Bai, Zhiyuan; Jiang, Zhiguang; Liu, Yang; Yu, Qi

    2017-11-01

    A high voltage GaN-based vertical field effect transistor with interfacial charge engineering (GaN ICE-VFET) is proposed and its breakdown mechanism is presented. This vertical FET features oxide trenches that carry a fixed negative charge at the oxide/GaN interface. In the off-state, the trench oxide layer first acts as a field plate; second, the n-GaN buffer layer is inverted along the oxide/GaN interface, forming a vertical hole layer that acts as a virtual p-pillar and laterally depletes the n-buffer pillar. Both effects modulate the electric field distribution in the device and significantly increase the breakdown voltage (BV). Compared with a conventional GaN vertical FET, the BV of the GaN ICE-VFET is increased from 1148 V to 4153 V at the same buffer thickness of 20 μm. Furthermore, the proposed device achieves a great improvement in the tradeoff between BV and on-resistance, and its figure of merit even exceeds the one-dimensional GaN limit.
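    The BV/on-resistance tradeoff quoted here is conventionally scored with the unipolar power-device figure of merit below; this is the standard definition, not an equation from the paper, and the closing estimate assumes the specific on-resistance stays fixed.

```latex
\mathrm{FOM} \;=\; \frac{BV^{2}}{R_{\mathrm{on,sp}}}
```

    Under that assumption, raising BV from 1148 V to 4153 V scales the figure of merit by (4153/1148)^2, roughly a factor of 13.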

  3. Continuous Three-Dimensional Control of a Virtual Helicopter Using a Motor Imagery Based Brain-Computer Interface

    PubMed Central

    Doud, Alexander J.; Lucas, John P.; Pisansky, Marc T.; He, Bin

    2011-01-01

    Brain-computer interfaces (BCIs) allow a user to interact with a computer system using thought. However, only recently have devices capable of providing sophisticated multi-dimensional control been achieved non-invasively. A major goal for non-invasive BCI systems has been to provide continuous, intuitive, and accurate control, while retaining a high level of user autonomy. By employing electroencephalography (EEG) to record and decode sensorimotor rhythms (SMRs) induced from motor imaginations, a consistent, user-specific control signal may be characterized. Utilizing a novel method of interactive and continuous control, we trained three normal subjects to modulate their SMRs to achieve three-dimensional movement of a virtual helicopter that is fast, accurate, and continuous. In this system, the virtual helicopter's forward-backward translation and elevation controls were actuated through the modulation of sensorimotor rhythms that were converted to forces applied to the virtual helicopter at every simulation time step, and the helicopter's angle of left or right rotation was linearly mapped, with higher resolution, from sensorimotor rhythms associated with other motor imaginations. These different resolutions of control allow for interplay between general intent actuation and fine control as is seen in the gross and fine movements of the arm and hand. Subjects controlled the helicopter with the goal of flying through rings (targets) randomly positioned and oriented in a three-dimensional space. The subjects flew through rings continuously, acquiring as many as 11 consecutive rings within a five-minute period. In total, the study group successfully acquired over 85% of presented targets. These results affirm the effective, three-dimensional control of our motor imagery based BCI system, and suggest its potential applications in biological navigation, neuroprosthetics, and other applications. PMID:22046274
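    A minimal sketch of the control mapping the abstract describes, where decoded sensorimotor-rhythm signals become forces integrated at every simulation time step while rotation is mapped linearly, is given below; the gains, drag term, and signal normalization are hypothetical.

```python
import numpy as np

# Sketch: decoded SMR control signals -> forces on the virtual helicopter at
# each timestep; yaw mapped linearly with a separate gain. All constants assumed.
DT, MASS, DRAG = 0.05, 1.0, 0.8
K_FORCE, K_YAW = 4.0, 45.0   # force gain (N per unit SMR), yaw gain (degrees)

pos = np.zeros(3)            # x (fwd/back), y (unused), z (elevation)
vel = np.zeros(3)

def step(smr_fwd, smr_up, smr_rot):
    """smr_* are normalized control signals in [-1, 1] decoded from EEG."""
    global pos, vel
    force = np.array([K_FORCE * smr_fwd, 0.0, K_FORCE * smr_up])
    vel += (force / MASS - DRAG * vel) * DT   # integrate force -> velocity
    pos += vel * DT                           # velocity -> position
    yaw = K_YAW * smr_rot                     # rotation mapped linearly
    return pos.copy(), yaw

print(step(0.5, 0.2, -0.1))
```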

  4. The Adaptive Effects Of Virtual Interfaces: Vestibulo-Ocular Reflex and Simulator Sickness.

    DTIC Science & Technology

    1998-08-07

    rearrangement: a pattern of stimulation differing from that existing as a result of normal interactions with the real world. Stimulus rearrangements can... is immersive and interactive. virtual interface: a system of transducers, signal processors, computer hardware and software that create an... interactive medium through which: 1) information is transmitted to the senses in the form of two- and three-dimensional virtual images and 2) psychomotor...

  5. Agent-Based Intelligent Interface for Wheelchair Movement Control

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.

    2018-01-01

    People who suffer from any kind of motor difficulty face serious complications in moving autonomously in their daily lives. However, a growing number of research projects propose different powered-wheelchair control systems. Despite the interest of the research community in the area, there is no platform that allows easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents on computationally limited devices. In order to validate the proper functioning of the proposed system, it has been integrated into a conventional wheelchair, and a set of alternative control interfaces has been developed and deployed, including a portable electroencephalography system, a voice interface, and a specifically designed smartphone application. A set of tests was conducted to assess both the platform's adequacy and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603
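    For flavor, a minimal sketch of a lightweight, length-prefixed message format of the kind such a platform might run on computationally limited embedded agents is shown below; the field names and framing are illustrative, not the protocol from the paper.

```python
import json
import struct

# Sketch of a compact agent message: a 4-byte length header followed by a
# small JSON body. Fields ("from", "act", "data") are invented for illustration.
def pack(sender, performative, payload):
    body = json.dumps({"from": sender, "act": performative,
                       "data": payload}).encode("utf-8")
    return struct.pack("!I", len(body)) + body

def unpack(buf):
    (length,) = struct.unpack("!I", buf[:4])
    return json.loads(buf[4:4 + length].decode("utf-8"))

msg = pack("eeg-agent", "request", {"cmd": "move", "dir": "forward"})
print(unpack(msg))   # -> {'from': 'eeg-agent', 'act': 'request', ...}
```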

  6. Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.

    PubMed

    Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2016-01-01

    This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison with a tablet computer incorporating vibration feedback was conducted, using (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard-nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. It is noted that multi-point pseudo-haptic feedback performs similarly well when compared to a vibration-based feedback method on both performance measures, elapsed time and nodule detection sensitivity. This indicates that the proposed method can be used to convey detailed haptic information, even subtle information, for virtual environment tasks using either a computer mouse or a pressure-sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occur, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery on soft-tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either on desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
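    The speed-control half of the method can be sketched as a control/display ratio that shrinks over stiff material, so equal input motion produces less on-screen motion and the surface "feels" harder; the functional form and gains below are assumptions.

```python
# Sketch of pseudo-haptic speed control: the on-screen indenter avatar slows
# over stiff material. The control/display mapping here is illustrative.
def avatar_displacement(input_dx, local_stiffness, k_ref=1.0):
    """Scale mouse/stylus displacement by a control/display ratio that
    decreases as the simulated surface stiffness increases."""
    cd_ratio = k_ref / (k_ref + local_stiffness)   # in (0, 1]
    return input_dx * cd_ratio

soft, hard_nodule = 0.5, 8.0
print(avatar_displacement(10.0, soft))         # larger on-screen motion
print(avatar_displacement(10.0, hard_nodule))  # avatar slows over the nodule
```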

  7. Diversity of devices along with diversity of data formats as a new challenge in global teaching and learning system

    NASA Astrophysics Data System (ADS)

    Sultana, Razia; Christ, Andreas; Meyrueis, Patrick

    2014-07-01

    The popularity of mobile communication devices is increasing day by day among students, especially for e-learning activities. The "always-ready-to-use" nature of mobile devices is a key motivation for students to use them even in a short break for a short time. This leads to new requirements regarding learning content presentation, user interfaces, and system architecture for heterogeneous devices. Supporting diverse devices is not enough to establish a global teaching and learning system; it is equally important to support various data formats across devices with different capabilities in terms of processing power, display size, supported data formats, operating system, data access method, etc. Not only existing data formats but also upcoming ones, such as those emerging from research in optics and photonics or virtual reality, should be considered. This paper discusses the importance, risks and challenges of supporting heterogeneous devices with heterogeneous data as learning content, to make a global teaching and learning system literally come true anytime and anywhere. We propose and implement a sustainable architecture to support a device- and data-format-independent learning system.

  8. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
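    A sketch of a single-point-distance-to-audio mapping in this spirit is shown below; the actual EyeCane cue scheme is not reproduced here, and the range and frequency constants are invented for illustration.

```python
# Sketch: nearer obstacles produce faster, higher-pitched beeps. The EyeCane's
# real cue scheme is not reproduced; all constants here are illustrative.
MAX_RANGE = 5.0   # meters of extended sensory range (assumed)

def distance_to_cue(distance_m):
    d = max(0.0, min(distance_m, MAX_RANGE)) / MAX_RANGE  # normalize to [0, 1]
    beep_rate_hz = 10.0 * (1.0 - d) + 1.0   # 11 Hz when touching, 1 Hz at range
    pitch_hz = 880.0 - 440.0 * d            # 880 Hz near, 440 Hz far
    return beep_rate_hz, pitch_hz

for d in (0.2, 1.0, 4.5):
    print(d, distance_to_cue(d))
```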

  9. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's task of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps make the realized automation systems very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is one example which is also described.

  10. The universal toolbox thermal imager

    NASA Astrophysics Data System (ADS)

    Hollock, Steve; Jones, Graham; Usowicz, Paul

    2003-09-01

    The introduction of Microsoft Pocket PC 2000/2002 has seen a standardisation of the operating systems used by the majority of PDA manufacturers. This, coupled with the recent price reductions associated with these devices, has led to a rapid increase in the sales of such units; their use is now common in industrial, commercial and domestic applications throughout the world. This paper describes the results of a programme to develop a thermal imager that will interface directly to all of these units, so as to take advantage of the existing and future installed base of such devices. The imager currently interfaces with virtually any Pocket PC that provides the necessary processing, display and storage capability; as an alternative, the output of the unit can be visualised and processed in real time using a PC or laptop computer. In future, the open architecture employed by this imager will allow it to support all mobile computing devices, including phones and PDAs. The imager has been designed for one-handed or two-handed operation so that it may be pointed at awkward angles or used in confined spaces; this flexibility of use, coupled with the extensive feature range and exceedingly low cost of the imager, is extending the marketplace for thermal imaging from military and professional, through industrial, to the commercial and domestic marketplaces.

  11. [Training cortical signals by means of a BMI-EEG system, its evolution and intervention. A case report].

    PubMed

    Monge-Pereira, E; Casatorres Perez-Higueras, I; Fernandez-Gonzalez, P; Ibanez-Pereda, J; Serrano, J I; Molina-Rueda, F

    2017-04-16

    In recent years, new technologies such as brain-machine interfaces (BMI) have been incorporated into the rehabilitation process of subjects with stroke. These systems are able to detect motion intention by analyzing cortical signals using techniques such as electroencephalography (EEG). This information can guide different interfaces such as robotic devices, electrical stimulation or virtual reality. A 40-year-old man with stroke, two months after the injury, participated in this study. We used a BMI based on EEG. The subject's motion intention was analyzed by calculating the event-related desynchronization. Upper limb motor function was evaluated with the Fugl-Meyer Assessment, and the participant's satisfaction was evaluated using the QUEST 2.0. The intervention, using a physical therapist as an interface, was carried out without difficulty. The BMI system detected cortical changes in a subacute stroke subject. These changes are coherent with the evolution observed using the Fugl-Meyer Assessment.
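    The event-related desynchronization (ERD) measure used here has a standard percentage form; the sketch below computes it over an assumed mu band (8-12 Hz), with synthetic epochs standing in for real EEG.

```python
import numpy as np
from scipy.signal import welch

# Standard ERD% computation for motion-intention detection; the band, epoch
# length, and sampling rate are illustrative choices, not from the report.
FS = 250   # sampling rate (Hz), assumed

def band_power(x, lo=8.0, hi=12.0):
    f, pxx = welch(x, fs=FS, nperseg=FS)    # PSD via Welch's method
    return pxx[(f >= lo) & (f <= hi)].sum()

def erd_percent(baseline_epoch, task_epoch):
    p_base = band_power(baseline_epoch)
    p_task = band_power(task_epoch)
    return 100.0 * (p_base - p_task) / p_base   # positive = desynchronization

rng = np.random.default_rng(0)
baseline = rng.standard_normal(2 * FS)      # stand-ins for motor-cortex epochs
task = 0.7 * rng.standard_normal(2 * FS)    # reduced mu power during imagery
print(erd_percent(baseline, task))
```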

  12. Visual communication interface for severe physically disabled patients

    NASA Astrophysics Data System (ADS)

    Savino, M. J.; Fernández, E. A.

    2007-11-01

    In recent years, several interfaces have been developed to allow communication by patients suffering serious physical disabilities. In this work, a computer-based communication interface is presented. It was designed to allow communication by patients who can use neither their hands nor their voice but can communicate through their eyes. The system monitors eye movements by means of a webcam. Then, by means of an Artificial Neural Network, the system identifies specified positions on the screen from the detected eye positions. In this way the user can control a virtual keyboard on a screen that allows him to write and browse the system, and enables him to send e-mails and SMS, activate video/music programs, and control environmental devices. A patient was simulated to evaluate the versatility of the system. Its operation was satisfactory and allowed evaluation of the system's potential. The development of this system requires only low-cost elements that are easily found on the market.
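    The eye-position-to-screen-coordinate mapping can be sketched with a small neural network regressor; the calibration data below are synthetic stand-ins for webcam measurements, and the network size is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the ANN mapping stage: calibration samples of normalized pupil
# coordinates -> screen coordinates. Synthetic data replaces real webcam input.
rng = np.random.default_rng(1)
pupil_xy = rng.uniform(0, 1, size=(200, 2))                        # pupil positions
screen_xy = pupil_xy * [1920, 1080] + rng.normal(0, 5, (200, 2))   # noisy targets

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(pupil_xy, screen_xy)

gaze = net.predict([[0.4, 0.7]])[0]   # estimated on-screen fixation point
print(gaze)   # the virtual-keyboard key under this point would be selected
```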

  13. Subthreshold Schottky-barrier thin-film transistors with ultralow power and high intrinsic gain

    NASA Astrophysics Data System (ADS)

    Lee, Sungsik; Nathan, Arokia

    2016-10-01

    The quest for low power becomes highly compelling in newly emerging application areas related to wearable devices in the Internet of Things. Here, we report on a Schottky-barrier indium-gallium-zinc-oxide thin-film transistor operating in the deep subthreshold regime (i.e., near the OFF state) at low supply voltages (<1 volt) and ultralow power (<1 nanowatt). By using a Schottky-barrier at the source and drain contacts, the current-voltage characteristics of the transistor were virtually channel-length independent with an infinite output resistance. It exhibited high intrinsic gain (>400) that was both bias and geometry independent. The transistor reported here is useful for sensor interface circuits in wearable devices where high current sensitivity and ultralow power are vital for battery-less operation.

  14. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    NASA Astrophysics Data System (ADS)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that presents the whole learning process of a laboratory. Many tools and materials are needed by the user for practical work in it, so the user can experience a new learning atmosphere through this innovation. Technologies are now more sophisticated than before, and carrying them into education can make it more effective and efficient. Supporting technologies such as a head-mounted display device and a hand motion gesture device are needed to build the VR Lab, and their integration is the basis of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory. The hand motion gesture device captures the user's real hand, which is then visualized in the virtual reality laboratory. Using the newest technologies in the learning process can make it more interesting and easier to understand.

  15. Combined virtual and real robotic test-bed for single operator control of multiple robots

    NASA Astrophysics Data System (ADS)

    Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash

    2010-04-01

    Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which links every real robot to its virtual counterpart. A novel virtual interface has been integrated with Augmented Reality that can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment and improve user situational awareness. An operator can efficiently control the real robots using the Drag-to-Move method on their virtual counterparts. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, image guidance and tracking reduce operator workload.
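    Guarded teleoperation can be sketched as scaling each commanded velocity by the robot's distance to the nearest known obstacle; the thresholds below are illustrative.

```python
import numpy as np

# Sketch of guarded teleoperation: operator velocity commands are attenuated
# as the robot approaches the nearest obstacle. Distances are illustrative.
STOP_DIST, SLOW_DIST = 0.3, 1.5   # meters

def guard(cmd_vel, obstacle_points, robot_pos):
    """Scale the commanded velocity by proximity to the nearest obstacle."""
    d = np.linalg.norm(np.asarray(obstacle_points) - robot_pos, axis=1).min()
    if d <= STOP_DIST:
        return np.zeros_like(cmd_vel)                  # hard stop near contact
    if d < SLOW_DIST:
        return cmd_vel * (d - STOP_DIST) / (SLOW_DIST - STOP_DIST)
    return cmd_vel                                     # free space: pass through

cmd = np.array([0.8, 0.0])
print(guard(cmd, [[1.0, 0.0], [3.0, 2.0]], np.array([0.0, 0.0])))
```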

  16. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-MIDI (Musical Instrument Digital Interface) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by engaging multiple senses of the participant and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.

  17. Modeling Proton Irradiation in AlGaN/GaN HEMTs: Understanding the Increase of Critical Voltage

    NASA Astrophysics Data System (ADS)

    Patrick, Erin; Law, Mark E.; Liu, Lu; Cuervo, Camilo Velez; Xi, Yuyin; Ren, Fan; Pearton, Stephen J.

    2013-12-01

    A combination of TRIM and FLOODS is used to model the effect of radiation damage on AlGaN/GaN HEMTs. While excellent fits are obtained for the threshold voltage shift, the models do not fully explain the increased reliability observed experimentally. In short, the addition of negatively charged traps in the GaN buffer layer does not significantly change the electric field at the gate edges at the radiation fluence levels seen in this study. We propose that negative trapped charge at the nitride/AlGaN interface produces a virtual-gate effect that decreases the magnitude of the electric field at the gate edges and thus increases the critical voltage. Simulation results including nitride interface charge show significant changes in the electric field profiles, while the I-V device characteristics do not change.

  18. Development and user evaluation of a virtual rehabilitation system for wobble board balance training.

    PubMed

    Fitzgerald, Diarmaid; Trakarnratanakul, Nanthana; Dunne, Lucy; Smyth, Barry; Caulfield, Brian

    2008-01-01

    We have developed a prototype virtual reality-based balance training system using a single inertial orientation sensor attached to the upper surface of a wobble board. This input device has been interfaced with Neverball, an open source computer game, to create the balance training platform. Users exercise with the system by standing on the wobble board and tilting it in different directions to control an on-screen environment. We have also developed a customized instruction manual for setting up the system. To evaluate the usability of our prototype system, we undertook a user evaluation study with twelve healthy novice participants. Participants were required to assemble the system using the instruction manual and then perform balance exercises with it. Following this period of exercise, VRUSE, a usability evaluation questionnaire, was completed by the participants. Results indicated a high level of usability in all categories evaluated.

  19. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders

    PubMed Central

    Maples-Keller, Jessica L.; Bunnell, Brian E.; Kim, Sae-Jin; Rothbaum, Barbara O.

    2016-01-01

    Virtual reality, or VR, allows users to experience a sense of presence in a computer-generated three-dimensional environment. Sensory information is delivered through a head mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR allows for controlled delivery of sensory stimulation via the therapist and is a convenient and cost-effective treatment. The primary focus of this article is to review the available literature regarding the effectiveness of incorporating VR within the psychiatric treatment of a wide range of psychiatric disorders, with a specific focus on exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR based treatment for anxiety or other psychiatric disorders. This review will provide an overview of the history of the development of VR based technology and its use within psychiatric treatment, an overview of the empirical evidence for VR based treatment, the benefits for using VR for psychiatric research and treatment, recommendations for how to incorporate VR into psychiatric care, and future directions for VR based treatment and clinical research. PMID:28475502

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez Anez, Francisco

    This paper presents two development projects (STARMATE and VIRMAN) focused on supporting maintenance training. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. The objective is to create a computer tool for elaborating maintenance training courses and delivering training based on 3D virtual reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention while trying to follow the procedure. Users can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project in the frame of 'Information Societies Technologies'. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers. The user has his hands free for performing the maintenance intervention, and he can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts, independent of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)

  1. Laser device

    DOEpatents

    Scott, Jill R.; Tremblay, Paul L.

    2008-08-19

    A laser device includes a virtual source configured to aim laser energy that originates from a true source. The virtual source has a vertical rotational axis during vertical motion of the virtual source and the vertical axis passes through an exit point from which the laser energy emanates independent of virtual source position. The emanating laser energy is collinear with an orientation line. The laser device includes a virtual source manipulation mechanism that positions the virtual source. The manipulation mechanism has a center of lateral pivot approximately coincident with a lateral index and a center of vertical pivot approximately coincident with a vertical index. The vertical index and lateral index intersect at an index origin. The virtual source and manipulation mechanism auto align the orientation line through the index origin during virtual source motion.

  2. BIG data based on healthcare analysis using IOT devices

    NASA Astrophysics Data System (ADS)

    Priyanka, A.; Parimala, M.; Sudheer, K.; Thippareddy; Kaluri, Rajesh; Lakshmanna, Kuruva; Reddy, M. Praveen Kumar

    2017-11-01

    The Internet of Things (IoT) enables smart objects in healthcare: a heterogeneous, wirelessly communicating arrangement of devices that interface with the human body and provide diagnosis, monitoring, tracking and storage of virtual statistical and medical information. In this paper, the patient's personal information and health condition are analyzed by the doctor, who can view the patient's condition and provide the appropriate solution for the patient. The large volumes of data (big data) involved are stored and moved into the particular organization's record system. This makes it possible to gather rich information indicative of our physical and mental health.

  3. A Virtual Object-Location Task for Children: Gender and Videogame Experience Influence Navigation; Age Impacts Memory and Completion Time

    PubMed Central

    Rodriguez-Andres, David; Mendez-Lopez, Magdalena; Juan, M.-Carmen; Perez-Hernandez, Elena

    2018-01-01

    The use of virtual reality-based tasks for studying memory has increased considerably. Most of the studies that have looked at the factors influencing children's performance on such tasks have focused on cognitive variables. However, little attention has been paid to the impact of non-cognitive skills. In the present paper, we tested 52 typically-developing children aged 5–12 years on a virtual object-location task. The task assessed their spatial short-term memory for the location of three objects in a virtual city. The virtual task environment was presented using a 3D application consisting of a 120″ stereoscopic screen and a gamepad interface. Measures of learning and displacement indicators in the virtual environment, 3D perception, satisfaction, and usability were obtained. We assessed the children's videogame experience, their visuospatial span, their ability to build blocks, and emotional and behavioral outcomes. The results indicate that learning improved with age. Significant effects on the speed of navigation were found favoring boys and those more experienced with videogames. Visuospatial skills correlated mainly with the ability to recall object positions, but the correlation was weak. Longer paths were related to higher scores of withdrawal behavior, attention problems, and a lower visuospatial span. Aggressiveness and experience with the device used for interaction were related to faster navigation. However, the correlations indicated only weak associations among these variables. PMID:29674988

  4. A Typology of Ethnographic Scales for Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boellstorff, Tom

    This chapter outlines a typology of genres of ethnographic research with regard to virtual worlds, informed by extensive research the author has completed both in Second Life and in Indonesia. It begins by identifying four confusions about virtual worlds: they are not games, they need not be graphical or even visual, they are not mass media, and they need not be defined in terms of escapist role-playing. A three-part typology of methods for ethnographic research in virtual worlds focuses on the relationship between research design and ethnographic scale. One class of methods for researching virtual worlds with regard to ethnographic scale explores interfaces between virtual worlds and the actual world, whereas a second examines interfaces between two or more virtual worlds. The third class involves studying a single virtual world in its own terms. Recognizing that all three approaches have merit for particular research purposes, ethnography of virtual worlds can be a vibrant field of research, contributing to central debates about human selfhood and sociality.

  5. CycloPs: generating virtual libraries of cyclized and constrained peptides including nonnatural amino acids.

    PubMed

    Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J

    2011-04-25

    We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface are designed to enable the rapid and convenient generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie .

  6. Virtual gap element approach for the treatment of non-matching interface using three-dimensional solid elements

    NASA Astrophysics Data System (ADS)

    Song, Yeo-Ul; Youn, Sung-Kie; Park, K. C.

    2017-10-01

    A method for three-dimensional non-matching interface treatment with a virtual gap element is developed. When partitioned structures contain curved interfaces and have different brick meshes, the discretized models have gaps along the interfaces. As these gaps introduce unexpected errors, special treatments are required to handle them. In the present work, a virtual gap element is introduced to link the frame and surface domain nodes in the framework of the mortar method. Since the surface of a hexahedron element is quadrilateral, the gap element is pyramidal. The pyramidal gap element consists of four domain nodes and one frame node. A zero-strain condition in the gap element is utilized for the interpolation of frame nodes in terms of the domain nodes. This approach is taken to satisfy momentum and energy conservation. The present method is applicable not only to curved interfaces with gaps, but also to flat interfaces in three dimensions. Several numerical examples are given to describe the effectiveness and accuracy of the proposed method.
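    The zero-strain interpolation can be written schematically as below; the exact shape functions are not given in the abstract, so this is only the generic partition-of-unity form such a pyramidal element would satisfy.

```latex
u_f \;=\; \sum_{i=1}^{4} N_i(\xi_f)\, u_i ,
\qquad \sum_{i=1}^{4} N_i(\xi_f) \;=\; 1 ,
```

    where $u_f$ is the frame-node displacement, $u_i$ are the displacements of the four domain nodes, and $N_i$ are shape functions evaluated at the frame node's position $\xi_f$ within the element.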

  7. Towards the virtual artery: a multiscale model for vascular physiology at the physics-chemistry-biology interface.

    PubMed

    Hoekstra, Alfons G; Alowayyed, Saad; Lorenz, Eric; Melnikova, Natalia; Mountrakis, Lampros; van Rooij, Britt; Svitenkov, Andrew; Závodszky, Gábor; Zun, Pavel

    2016-11-13

    This discussion paper introduces the concept of the Virtual Artery as a multiscale model for arterial physiology and pathologies at the physics-chemistry-biology (PCB) interface. The cellular level is identified as the mesoscopic level, and we argue that by coupling cell-based models with other relevant models on the macro- and microscale, a versatile model of arterial health and disease can be composed. We review the necessary ingredients, both models of arteries at many different scales, as well as generic methods to compose multiscale models. Next, we discuss how this can be combined into the virtual artery. Finally, we argue that the concept of models at the PCB interface could or perhaps should become a powerful paradigm, not only as in our case for studying physiology, but also for many other systems that have such PCB interfaces. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Authors.

  8. On-line data display

    NASA Astrophysics Data System (ADS)

    Lang, Sherman Y. T.; Brooks, Martin; Gauthier, Marc; Wein, Marceli

    1993-05-01

    A data display system for embedded realtime systems has been developed for use as an operator's user interface and debugging tool. The motivation for development of the On-Line Data Display (ODD) has come from several sources. In particular, the design reflects the needs of researchers developing an experimental mobile robot within our laboratory. A proliferation of specialized user interfaces revealed a need for a flexible communications and graphical data display system. At the same time, the system had to be readily extensible for the arbitrary graphical display formats required by the data visualization needs of the researchers. The system defines a communication protocol transmitting 'datagrams' between tasks executing on the realtime system and virtual devices displaying the data in a meaningful way on a graphical workstation. The communication protocol multiplexes logical channels on a single data stream. The current implementation consists of a server for the Harmony realtime operating system and an application written for the Macintosh computer. Flexibility requirements resulted in a highly modular server design and a layered, modular, object-oriented design for the Macintosh part of the system. Users assign data types to specific channels at run time. Devices are then instantiated by the user and connected to channels to receive datagrams. The current suite of device types does not provide enough functionality for most users' specialized needs. Instead, the system design allows the creation of new device types with modest programming effort. The protocol, design and use of the system are discussed.
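    Multiplexing logical channels over one stream, as the ODD protocol does, can be sketched with a channel-id header on each datagram; the byte layout and JSON payload below are illustrative, not the Harmony implementation.

```python
import json
import struct

# Sketch of channel multiplexing: each datagram carries a channel id so the
# workstation can route it to the virtual device bound to that channel.
def make_datagram(channel_id, payload):
    body = json.dumps(payload).encode("utf-8")
    return struct.pack("!HI", channel_id, len(body)) + body   # 6-byte header

def route(stream, devices):
    """Dispatch each datagram on the stream to its channel's device."""
    off = 0
    while off < len(stream):
        chan, length = struct.unpack_from("!HI", stream, off)
        body = json.loads(stream[off + 6: off + 6 + length])
        devices[chan](body)                 # virtual device renders the data
        off += 6 + length

stream = make_datagram(1, {"t": 0.1, "x": 3.2}) + make_datagram(2, {"temp": 41})
route(stream, {1: print, 2: print})
```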

  9. Using a cVEP-Based Brain-Computer Interface to Control a Virtual Agent.

    PubMed

    Riechmann, Hannes; Finke, Andrea; Ritter, Helge

    2016-06-01

    Brain-computer interfaces provide a means for controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate, obstructing a wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEP) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel, dynamic repetition algorithm that allows for optimizing the balance between accuracy and speed individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation results in ITRs of 50 bits/min on average and 68 bits/min maximum. Thus, this work supports the notion of cVEP-BCIs as a particular fast and robust approach suitable for real-world use.
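    The core of cVEP classification, picking the circular code shift whose response template best correlates with the recorded epoch, can be sketched as below; the code length, shift spacing, and 12-command layout are assumptions for illustration.

```python
import numpy as np

# Sketch: each command's stimulus is a circular shift of one pseudorandom
# code; classification selects the best-correlating shifted template.
rng = np.random.default_rng(2)
code = rng.integers(0, 2, 63).astype(float)             # m-sequence-like code
templates = [np.roll(code, 5 * k) for k in range(12)]   # 12 commands (assumed)

def classify(epoch):
    """Return the command index whose template best matches the epoch."""
    scores = [np.corrcoef(epoch, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

epoch = templates[7] + 0.8 * rng.standard_normal(63)    # noisy response to cmd 7
print(classify(epoch))   # -> 7 (noise permitting)
```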

  10. Design of an efficient framework for fast prototyping of customized human-computer interfaces and virtual environments for rehabilitation.

    PubMed

    Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe

    2013-06-01

    Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific to each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Virtual Reality: An Instructional Medium for Visual-Spatial Tasks.

    ERIC Educational Resources Information Center

    Regian, J. Wesley; And Others

    1992-01-01

    Describes an empirical exploration of the instructional potential of virtual reality as an interface for simulation-based training. Shows that subjects learned spatial-procedural and spatial-navigational skills in virtual reality. (SR)

  12. Reaching the Next Generation of College Students via Their Digital Devices.

    NASA Astrophysics Data System (ADS)

    Whitmeyer, S. J.; De Paor, D. G.; Bentley, C.

    2015-12-01

    Current college students attended school during a decade in which many school districts banned cellphones from the classroom or even from school grounds. These students are used to being told to put away their mobile devices and concentrate on traditional classroom activities such as watching PowerPoint presentations or calculating with pencil and paper. However, due to a combination of parental security concerns and recent education research, schools are rapidly changing policy and embracing mobile devices for ubiquitous learning opportunities inside and outside of the classroom. Consequently, many of the next generation of college students will have expectations of learning via mobile technology. We have developed a range of digital geology resources to aid mobile-based geoscience education at college level, including mapping on iPads and other tablets, "crowd-sourced" field projects, augmented reality-supported asynchronous field classes, 3D and 4D split-screen virtual reality tours, macroscopic and microscopic gigapixel imagery, 360° panoramas, assistive devices for inclusive field education, and game-style educational challenges. Class testing of virtual planetary tours shows modest short-term learning gains, but more work is needed to ensure long-term retention. Many of our resources rely on the Google Earth browser plug-in and application program interface (API). Because of security concerns, browser plug-ins in general are being phased out and the Google Earth API will not be supported in future browsers. However, a new plug-in-free API is promised by Google and an alternative open-source virtual globe called Cesium is undergoing rapid development. It already supports the main aspects of Keyhole Markup Language and has features of significant benefit to geoscience, including full support on mobile devices and sub-surface viewing and touring. The research team includes: Heather Almquist, Stephen Burgin, Cinzia Cervato, Filis Coba, Chloe Constants, Gene Cooper, Mladen Dordevic, Marissa Dudek, Brandon Fitzwater, Bridget Gomez, Tyler Hansen, Paul Karabinos, Terry Pavlis, Jen Piatek, Alan Pitts, Robin Rohrback, Bill Richards, Caroline Robinson, Jeff Rollins, Jeff Ryan, Ron Schott, Kristen St. John, and Barb Tewksbury. Supported by NSF DUE 1323419 and by Google Geo Curriculum Awards.

  13. Subthreshold Schottky-barrier thin-film transistors with ultralow power and high intrinsic gain.

    PubMed

    Lee, Sungsik; Nathan, Arokia

    2016-10-21

    The quest for low power becomes highly compelling in newly emerging application areas related to wearable devices in the Internet of Things. Here, we report on a Schottky-barrier indium-gallium-zinc-oxide thin-film transistor operating in the deep subthreshold regime (i.e., near the OFF state) at low supply voltages (<1 volt) and ultralow power (<1 nanowatt). By using a Schottky-barrier at the source and drain contacts, the current-voltage characteristics of the transistor were virtually channel-length independent with an infinite output resistance. It exhibited high intrinsic gain (>400) that was both bias and geometry independent. The transistor reported here is useful for sensor interface circuits in wearable devices where high current sensitivity and ultralow power are vital for battery-less operation. Copyright © 2016, American Association for the Advancement of Science.

  14. Avatars and virtual agents – relationship interfaces for the elderly

    PubMed Central

    2017-01-01

    In the Digital Era, we witness a change in the relationship between the patient and the caregiver or Health Maintenance Organization providing the health services. Another fact is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems, decision-making tools, and online and self-service applications to virtual agents, all providing information and assistance. The common thread between all these digital implementations is that they all require human-machine interfaces. These interfaces must be interactive, user friendly and inviting, to create user involvement and cooperation incentives. The challenge is to design interfaces which best fit the target users and enable smooth interaction, especially for elderly users. Avatars and virtual agents are one type of interface used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relation between elderly users and the avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly. The study proposes key features for the design and evaluation of assistive applications using avatars and virtual agents for elderly users. PMID:28706725

  15. VEVI: A Virtual Reality Tool For Robotic Planetary Explorations

    NASA Technical Reports Server (NTRS)

    Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik

    1994-01-01

    The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly evolved human sensory system. VEVI utilizes real-time, interactive 3D graphics and position/orientation sensors to produce a range of interface modalities, from flat-panel (windowed or stereoscopic) screen displays to head-mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air-bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed, and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.

  16. Design strategies and functionality of the Visual Interface for Virtual Interaction Development (VIVID) tool

    NASA Technical Reports Server (NTRS)

    Nguyen, Lac; Kenney, Patrick J.

    1993-01-01

    Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphics library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command-line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool: an X-Windows based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. Future development plans for VIVID, in conjunction with its existing interfaces and predefined suite of behaviors, will also be described. These include incorporation of dual-user virtual environment enhancements, tool expansion, and additional behaviors.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasanah, Lilik, E-mail: lilikhasanah@upi.edu; Suhendi, Endi; Tayubi, Yuyu Rahmat

    In this work we discuss the impact of Si interface surface roughness on the tunneling current of the Si/Si{sub 1-x}Ge{sub x}/Si heterojunction bipolar transistor. The Si interface surface roughness can be analyzed from the electrical characteristics through the transversal electron velocity obtained as a fitting parameter. The results showed that surface roughness increases as the Ge content of the virtual substrate increases. This model can be used to investigate the effect of the Ge content of the virtual substrate on the interface surface condition through the current-voltage characteristics.

  18. Tactile Data Entry for Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.; Olowin, Aaron B.; Hannaford, Blake; Sands, O Scott

    2012-01-01

    In the task-saturated environment of extravehicular activity (EVA), an astronaut's ability to leverage suit-integrated information systems is limited by a lack of options for data entry. In particular, bulky gloves inhibit the ability to interact with standard computing interfaces such as a mouse or keyboard. This paper presents the results of a preliminary investigation into a system that permits the space suit gloves themselves to be used as data entry devices. Hand motion tracking is combined with simple finger gesture recognition to enable use of a virtual keyboard, while tactile feedback provides touch-based context to the graphical user interface (GUI) and positive confirmation of keystroke events. In human subject trials, conducted with twenty participants using a prototype system, participants entered text significantly faster with tactile feedback than without (p = 0.02). The results support incorporation of vibrotactile information in a future system that will enable full touch typing and general mouse interactions using instrumented EVA gloves.

  19. Virtual Reality: Toward Fundamental Improvements in Simulation-Based Training.

    ERIC Educational Resources Information Center

    Thurman, Richard A.; Mattoon, Joseph S.

    1994-01-01

    Considers the role and effectiveness of virtual reality in simulation-based training. The theoretical and practical implications of verity, integration, and natural versus artificial interface are discussed; a three-dimensional classification scheme for virtual reality is described; and the relationship between virtual reality and other…

  20. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement from the changing positions and number of light sources on the head. When the user browses a computer screen through the head-mounted display, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the computer program locates the center point of each pupil in the images and records information on movement traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head using a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.
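    Both subsystems start by locating bright features (the pupil region or the head-mounted light sources) in a camera frame; the sketch below uses simple thresholding plus an intensity centroid and maps the result to screen coordinates, with all constants invented for illustration.

```python
import numpy as np

# Sketch: find a bright blob (light source) by thresholding, take its
# centroid, and map image coordinates to cursor coordinates. Constants assumed.
def centroid(frame, thresh=200):
    ys, xs = np.nonzero(frame >= thresh)      # pixels of the bright feature
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()               # centroid in image coordinates

def to_cursor(pt, img_wh=(640, 480), screen_wh=(1920, 1080)):
    return (pt[0] / img_wh[0] * screen_wh[0],
            pt[1] / img_wh[1] * screen_wh[1])

frame = np.zeros((480, 640))
frame[100:104, 300:304] = 255                 # synthetic light-source blob
print(to_cursor(centroid(frame)))
```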

  1. 78 FR 54626 - Announcing Approval of Federal Information Processing Standard (FIPS) Publication 201-2, Personal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... its virtual contact interface be made mandatory as soon as possible for the many beneficial features... messaging and the virtual contact interface in the Standard, some Federal departments and agencies have... Laboratory Programs. [FR Doc. 2013-21491 Filed 9-4-13; 8:45 am] BILLING CODE 3510-13-P ...

  2. Linking Audio and Visual Information while Navigating in a Virtual Reality Kiosk Display

    ERIC Educational Resources Information Center

    Sullivan, Briana; Ware, Colin; Plumlee, Matthew

    2006-01-01

    3D interactive virtual reality museum exhibits should be easy to use, entertaining, and informative. If the interface is intuitive, it will allow the user more time to learn the educational content of the exhibit. This research deals with interface issues concerning activating audio descriptions of images in such exhibits while the user is…

  3. Virtual Reality: A Dream Come True or a Nightmare.

    ERIC Educational Resources Information Center

    Cornell, Richard; Bailey, Dan

    Virtual Reality (VR) is a new medium which allows total stimulation of one's senses through human/computer interfaces. VR has applications in training simulators, nano-science, medicine, entertainment, electronic technology, and manufacturing. This paper focuses on some current and potential problems of virtual reality and virtual environments…

  4. Real-time functional magnetic imaging-brain-computer interface and virtual reality promising tools for the treatment of pedophilia.

    PubMed

    Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels

    2011-01-01

    This chapter proposes a prospective view on using a real-time functional magnetic resonance imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed.

  5. Wafer bonded epitaxial templates for silicon heterostructures

    DOEpatents

    Atwater, Jr., Harry A.; Zahler, James M. [Pasadena, CA]; Morral, Anna Fontcuberta i [Paris, FR]

    2008-03-11

    A heterostructure device layer is epitaxially grown on a virtual substrate, such as an InP/InGaAs/InP double heterostructure. A device substrate and a handle substrate form the virtual substrate. The device substrate is bonded to the handle substrate and is composed of a material suitable for fabrication of optoelectronic devices. The handle substrate is composed of a material suitable for providing mechanical support. The mechanical strength of the device and handle substrates is improved and the device substrate is thinned to leave a single-crystal film on the virtual substrate such as by exfoliation of a device film from the device substrate. An upper portion of the device film exfoliated from the device substrate is removed to provide a smoother and less defect prone surface for an optoelectronic device. A heterostructure is epitaxially grown on the smoothed surface in which an optoelectronic device may be fabricated.

  6. Wafer bonded epitaxial templates for silicon heterostructures

    NASA Technical Reports Server (NTRS)

    Atwater, Harry A., Jr. (Inventor); Zahler, James M. (Inventor); Morral, Anna Fontcuberta i (Inventor)

    2008-01-01

    A heterostructure device layer is epitaxially grown on a virtual substrate, such as an InP/InGaAs/InP double heterostructure. A device substrate and a handle substrate form the virtual substrate. The device substrate is bonded to the handle substrate and is composed of a material suitable for fabrication of optoelectronic devices. The handle substrate is composed of a material suitable for providing mechanical support. The mechanical strength of the device and handle substrates is improved and the device substrate is thinned to leave a single-crystal film on the virtual substrate such as by exfoliation of a device film from the device substrate. An upper portion of the device film exfoliated from the device substrate is removed to provide a smoother and less defect prone surface for an optoelectronic device. A heterostructure is epitaxially grown on the smoothed surface in which an optoelectronic device may be fabricated.

  7. VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients.

    PubMed

    Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F; Long, Daniel J; Bolch, Wesley E; Liu, Bob; Xu, X George

    2015-07-21

    This paper describes the development and testing of VirtualDose, a software tool for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and pregnancy stages. Models of GE Lightspeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework is designed with the 'software as a service (SaaS)' delivery concept under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows a third-party picture archiving and communication system software package to seamlessly integrate with VirtualDose's functions. Software testing showed that VirtualDose was compatible with numerous operating systems including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT, two dosimetry tools based on stylized pediatric and adult patient models known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism offered by stylized and voxel phantoms have caused significant discrepancies in CT dose estimations.
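
    The record above states that VirtualDose exposes a RESTful web service API for third-party integration but does not publish its endpoints. The client sketch below is therefore purely hypothetical: the URL, parameter names, and response schema are invented to show what such an integration could look like.

        # Hypothetical client sketch; this is NOT the actual VirtualDose API.
        import requests

        def fetch_organ_doses(base_url, phantom, scanner, kvp, mas, scan_range_cm):
            payload = {
                "phantom": phantom,        # e.g. a pediatric or pregnant-patient model
                "scanner": scanner,        # e.g. "GE LightSpeed Pro 16"
                "kVp": kvp,
                "mAs": mas,
                "scan_range_cm": scan_range_cm,
            }
            resp = requests.post(f"{base_url}/organ-doses", json=payload, timeout=30)
            resp.raise_for_status()
            return resp.json()             # assumed shape: {"organ": dose_mGy, ...}

        # Example call (placeholder URL and parameter values):
        # doses = fetch_organ_doses("https://virtualdose.example.org/api",
        #                           "adult_female", "GE LightSpeed Pro 16",
        #                           kvp=120, mas=200, scan_range_cm=[0, 45])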

  8. The combined impact of virtual reality neurorehabilitation and its interfaces on upper extremity functional recovery in patients with chronic stroke.

    PubMed

    Cameirão, Mónica S; Badia, Sergi Bermúdez i; Duarte, Esther; Frisoli, Antonio; Verschure, Paul F M J

    2012-10-01

    Although there is strong evidence on the beneficial effects of virtual reality (VR)-based rehabilitation, it is not yet well understood how the different aspects of these systems affect recovery. Consequently, we do not exactly know what features of VR neurorehabilitation systems are decisive in conveying their beneficial effects. To specifically address this issue, we developed 3 different configurations of the same VR-based rehabilitation system, the Rehabilitation Gaming System, using 3 different interface technologies: vision-based tracking, haptics, and a passive exoskeleton. Forty-four patients with chronic stroke were randomly allocated to one of the configurations and used the system for 35 minutes a day for 5 days a week during 4 weeks. Our results revealed significant within-subject improvements at most of the standard clinical evaluation scales for all groups. Specifically we observe that the beneficial effects of VR-based training are modulated by the use/nonuse of compensatory movement strategies and the specific sensorimotor contingencies presented to the user, that is, visual feedback versus combined visual-haptic feedback. Our findings suggest that the beneficial effects of VR-based neurorehabilitation systems such as the Rehabilitation Gaming System for the treatment of chronic stroke depend on the specific interface systems used. These results have strong implications for the design of future VR rehabilitation strategies that aim at maximizing functional outcomes and their retention. Clinical Trial Registration: This trial was not registered because it is a small clinical study that evaluates the feasibility of prototype devices.

  9. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    PubMed

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
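
    Of the four technical steps listed above, the first, color-based segmentation of the hand, is easy to make concrete. The sketch below uses an HSV skin-tone threshold in OpenCV; the threshold bounds and morphological cleanup are assumptions, since the paper does not specify its exact color model.

        # Assumed HSV skin-tone segmentation; bounds are illustrative, not the paper's.
        import cv2
        import numpy as np

        SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
        SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

        def hand_mask(frame_bgr):
            """Return a binary mask of skin-colored pixels, lightly denoised."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
            kernel = np.ones((5, 5), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
            return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes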

  10. Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface

    PubMed Central

    Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele

    2017-01-01

    A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source. PMID:28961198
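
    The attractive force feedback described above can be sketched as a saturated spring law pulling the haptic proxy toward the most intense detected source. This is not the authors' control law; the stiffness and force cap below are assumed values.

        # Assumed spring-like attraction toward the strongest detected source,
        # saturated at the device's force limit.
        import numpy as np

        K_SPRING = 0.8  # N per metre of separation (assumed)
        F_MAX = 3.0     # device force limit in newtons (assumed)

        def attractive_force(proxy_pos, source_pos):
            """Force vector on the haptic handle, drawing it toward the source."""
            delta = np.asarray(source_pos, float) - np.asarray(proxy_pos, float)
            force = K_SPRING * delta
            norm = np.linalg.norm(force)
            if norm > F_MAX:
                force *= F_MAX / norm  # saturate at the device limit
            return force

        print(attractive_force([0.0, 0.0, 0.0], [0.2, -0.1, 0.05]))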

  11. Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface.

    PubMed

    Aleotti, Jacopo; Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele; Zappettini, Andrea

    2017-09-29

    A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source.

  12. Vibrotactile sensory substitution for object manipulation: amplitude versus pulse train frequency modulation.

    PubMed

    Stepp, Cara E; Matsuoka, Yoky

    2012-01-01

    Incorporating sensory feedback with prosthetic devices is now possible, but the optimal methods of providing such feedback are still unknown. The relative utility of amplitude and pulse train frequency modulated stimulation paradigms for providing vibrotactile feedback for object manipulation was assessed in 10 participants. The two approaches were studied during virtual object manipulation using a robotic interface as a function of presentation order and a simultaneous cognitive load. Despite the potential pragmatic benefits associated with pulse train frequency modulated vibrotactile stimulation, comparison of the approach with amplitude modulation indicates that amplitude modulation vibrotactile stimulation provides superior feedback for object manipulation.
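
    The contrast under study can be made concrete with a toy mapping from a normalized grip force to each stimulation paradigm: amplitude modulation varies drive strength at a fixed carrier, while pulse train frequency modulation varies how often fixed-amplitude bursts repeat. All ranges below are illustrative assumptions, not the study's parameters.

        # Toy encodings of a normalized force (0..1); ranges are assumptions.
        def amplitude_modulation(force, amp_min=0.1, amp_max=1.0, carrier_hz=250.0):
            """Constant carrier; force scales the drive amplitude."""
            return {"carrier_hz": carrier_hz,
                    "amplitude": amp_min + force * (amp_max - amp_min)}

        def pulse_frequency_modulation(force, rate_min_hz=2.0, rate_max_hz=30.0):
            """Constant-amplitude bursts; force scales how often they repeat."""
            return {"amplitude": 1.0,
                    "pulse_rate_hz": rate_min_hz + force * (rate_max_hz - rate_min_hz)}

        for f in (0.0, 0.5, 1.0):
            print(f, amplitude_modulation(f), pulse_frequency_modulation(f))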

  13. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long accustoming times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.

  14. Design and Implementation of Cloud-Centric Configuration Repository for DIY IoT Applications

    PubMed Central

    Ahmad, Shabir; Kim, Do Hyeun

    2018-01-01

    The Do-It-Yourself (DIY) vision for the design of a smart and customizable IoT application demands the involvement of the general public in its development process. The general public lacks the technical knowledge for programming state-of-the-art prototyping and development kits. The latest IoT kits, for example, Raspberry Pi, are revolutionizing the DIY paradigm for IoT, and more than ever, a DIY intuitive programming interface is required to enable the masses to interact with and customize the behavior of remote IoT devices on the Internet. However, in most cases, these DIY toolkits store the resultant configuration data in local storage and, thus, cannot be accessed remotely. This paper presents the novel implementation of such a system, which not only enables the general public to customize the behavior of remote IoT devices through a visual interface, but also makes the configuration available everywhere and anytime by leveraging the power of cloud-based platforms. The interface enables the visualization of the resources exposed by remote embedded resources in the form of graphical virtual objects (VOs). These VOs are used to create the service design through simple operations like drag-and-drop and the setting of properties. The configuration created as a result is maintained as an XML document, which is ingested by the cloud platform, thus making it available to be used anywhere. We use the HTTP approach for the communication between the cloud and IoT toolbox and the cloud and real devices, but for communication between the toolbox and actual resources, CoAP is used. Finally, a smart home case study has been implemented and presented in order to assess the effectiveness of the proposed work. PMID:29415450
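
    The round trip described above, serializing a drag-and-drop service design to XML and pushing it to the cloud repository over HTTP, can be sketched as follows. The element names and URL are invented for illustration; the paper does not publish its configuration schema.

        # Invented schema and URL; shows the XML-over-HTTP flow the abstract describes.
        import xml.etree.ElementTree as ET
        import requests  # used by the commented-out upload below

        def build_configuration(virtual_objects):
            root = ET.Element("configuration")
            for vo in virtual_objects:
                node = ET.SubElement(root, "virtualObject", id=vo["id"])
                for key, value in vo["properties"].items():
                    ET.SubElement(node, "property", name=key).text = str(value)
            return ET.tostring(root, encoding="unicode")

        xml_doc = build_configuration(
            [{"id": "lamp-1", "properties": {"state": "on", "brightness": 80}}])
        print(xml_doc)
        # Push to the cloud repository (placeholder URL):
        # requests.post("https://cloud.example.org/repository/configs",
        #               data=xml_doc,
        #               headers={"Content-Type": "application/xml"}, timeout=30)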

  15. Design and Implementation of Cloud-Centric Configuration Repository for DIY IoT Applications.

    PubMed

    Ahmad, Shabir; Hang, Lei; Kim, Do Hyeun

    2018-02-06

    The Do-It-Yourself (DIY) vision for the design of a smart and customizable IoT application demands the involvement of the general public in its development process. The general public lacks the technical knowledge for programming state-of-the-art prototyping and development kits. The latest IoT kits, for example, Raspberry Pi, are revolutionizing the DIY paradigm for IoT, and more than ever, a DIY intuitive programming interface is required to enable the masses to interact with and customize the behavior of remote IoT devices on the Internet. However, in most cases, these DIY toolkits store the resultant configuration data in local storage and, thus, cannot be accessed remotely. This paper presents the novel implementation of such a system, which not only enables the general public to customize the behavior of remote IoT devices through a visual interface, but also makes the configuration available everywhere and anytime by leveraging the power of cloud-based platforms. The interface enables the visualization of the resources exposed by remote embedded resources in the form of graphical virtual objects (VOs). These VOs are used to create the service design through simple operations like drag-and-drop and the setting of properties. The configuration created as a result is maintained as an XML document, which is ingested by the cloud platform, thus making it available to be used anywhere. We use the HTTP approach for the communication between the cloud and IoT toolbox and the cloud and real devices, but for communication between the toolbox and actual resources, CoAP is used. Finally, a smart home case study has been implemented and presented in order to assess the effectiveness of the proposed work.

  16. The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface

    PubMed Central

    Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.

    2014-01-01

    Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users' cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, placed and labeled targets on maps more accurately, and estimated the size of the virtual environment more accurately.

  17. User Interface Technology Transfer to NASA's Virtual Wind Tunnel System

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1998-01-01

    Funded by NASA grants for four years, the Brown Computer Graphics Group has developed novel 3D user interfaces for desktop and immersive scientific visualization applications. This past grant period supported the design and development of a software library, the 3D Widget Library, which supports the construction and run-time management of 3D widgets. The 3D Widget Library is a mechanism for transferring user interface technology from the Brown Graphics Group to the Virtual Wind Tunnel system at NASA Ames as well as the public domain.

  18. Will Anything Useful Come Out of Virtual Reality? Examination of a Naval Application

    DTIC Science & Technology

    1993-05-01

    The term virtual reality can encompass varying meanings, but some generally accepted attributes of a virtual environment are that it is immersive...technology, but at present there are few practical applications which are utilizing the broad range of virtual reality technology. This paper will discuss an...Operability, operator functions, Virtual reality, Man-machine interface, Decision aids/decision making, Decision support. ASW.

  19. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  20. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks

    NASA Astrophysics Data System (ADS)

    Meng, Jianjun; Zhang, Shuying; Bekyo, Angeliki; Olsoe, Jaron; Baxter, Bryan; He, Bin

    2016-12-01

    Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy, performing tasks requiring multiple degrees of freedom by combining two sequential low-dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.

  1. Live Virtual Constructive (LVC): Interface Control Document (ICD) for the LVC Gateway. [Flight Test 3

    NASA Technical Reports Server (NTRS)

    Jovic, Srba

    2015-01-01

    This Interface Control Document (ICD) documents and tracks the information required for the Live Virtual Constructive (LVC) system components, as well as the protocols for communicating with them, in order to achieve all research objectives captured by the experiment requirements. The purpose of this ICD is to clearly communicate all inputs and outputs of the subsystem components.

  2. Virtual facebow technique.

    PubMed

    Solaberrieta, Eneko; Garmendia, Asier; Minguez, Rikardo; Brizuela, Aritza; Pradies, Guillermo

    2015-12-01

    This article describes a virtual technique for transferring the location of a digitized cast from the patient to a virtual articulator (virtual facebow transfer). Using a virtual procedure, the maxillary digital cast is transferred to a virtual articulator by means of reverse engineering devices. The following devices necessary to carry out this protocol are available in many contemporary practices: an intraoral scanner, a digital camera, and specific software. Results prove the viability of integrating different tools and software and of completely integrating this procedure into a dental digital workflow.

  3. The Virtual Solar Observatory and the Heliophysics Meta-Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph B.

    2007-01-01

    The Virtual Solar Observatory (VSO) is now able to search for solar data ranging from the radio to gamma rays, obtained from space and groundbased observatories, from 26 sources at 12 data providers, and from 1915 to the present. The solar physics community can use a Web interface or an Application Programming Interface (API) that allows integrating VSO searches into other software, including other Web services. Over the next few years, this integration will be especially obvious as the NASA Heliophysics division sponsors the development of a heliophysics-wide virtual observatory (VO), based on existing VO's in heliospheric, magnetospheric, and ionospheric physics as well as the VSO. We examine some of the challenges and potential of such a "meta-VO."

  4. COM1/348: Design and Implementation of a Portal for the Market of the Medical Equipment (MEDICOM)

    PubMed Central

    Palamas, S; Vlachos, I; Panou-Diamandi, O; Marinos, G; Kalivas, D; Zeelenberg, C; Nimwegen, C; Koutsouris, D

    1999-01-01

    Introduction The MEDICOM system provides the electronic means for medical equipment manufacturers to communicate online with their customers, supporting the Purchasing Process and Post Market Surveillance. The MEDICOM service will be provided over the Internet by the MEDICOM Portal, and by a set of distributed subsystems dedicated to handling structured information related to medical devices. There are three kinds of these subsystems: the Hypermedia Medical Catalogue (HMC), the Virtual Medical Exhibition (VME), which contains information in the form of Virtual Models, and the Post Market Surveillance system (PMS). The Universal Medical Devices Nomenclature System (UMDNS) is used to register all products. This work was partially funded by the ESPRIT Project 25289 (MEDICOM). Methods The Portal provides the end user interface operating as the MEDICOM Portal, acts as the yellow pages for finding both products and providers, provides links to the providers' servers, implements the system management and supports subsystem database compatibility. The Portal hosts a database system composed of two parts: (a) the Common Database, which describes a set of encoded parameters (like Supported Languages, Geographic Regions, UMDNS Codes, etc.) common to all subsystems, and (b) the Short Description Database, which contains summarised descriptions of medical devices, including a text description, the codes of the manufacturer, the UMDNS code, attribute values and links to the corresponding HTML pages of the HMC, VME and PMS servers. The Portal provides the MEDICOM user interface, including services like end user profiling and registration, end user query forms, creation and hosting of newsgroups, links to online libraries, end user subscription to manufacturers' mailing lists, online information for the MEDICOM system and special messages or advertisements from manufacturers. Results Platform independence and interoperability characterise the system design. A general purpose RDBMS is used for the implementation of the databases. The end user interface is implemented using HTML and Java applets, while the subsystem administration applications are developed using Java. The JDBC interface is used in order to provide database access to these applications. The communication between subsystems is implemented using CORBA objects, and Java servlets are used in subsystem servers for the activation of remote operations. Discussion In the second half of 1999, the MEDICOM Project will enter the phase of evaluation and pilot operation. The benefits of the MEDICOM system are expected to be the establishment of a worldwide-accessible marketplace between providers and health care professionals, giving the latter up-to-date, high-quality product information in an easy and friendly way while enhancing marketing procedures and after-sales support efficiency.

  5. Virtual Sensor Test Instrumentation

    NASA Technical Reports Server (NTRS)

    Wang, Roy

    2011-01-01

    Virtual Sensor Test Instrumentation is based on the concept of smart sensor technology for testing with intelligence needed to perform self-diagnosis of health, and to participate in a hierarchy of health determination at sensor, process, and system levels. A virtual sensor test instrumentation consists of five elements: (1) a common sensor interface, (2) microprocessor, (3) wireless interface, (4) signal conditioning and ADC/DAC (analog-to-digital conversion/digital-to-analog conversion), and (5) onboard EEPROM (electrically erasable programmable read-only memory) for metadata storage and executable software to create powerful, scalable, reconfigurable, and reliable embedded and distributed test instruments. In order to maximize efficient data conversion through the smart sensor node, plug-and-play functionality is required to interface with traditional sensors to enhance their identity and capabilities for data processing and communications. Virtual sensor test instrumentation can be accessed wirelessly via a Network Capable Application Processor (NCAP) or a Smart Transducer Interface Module (STIM) that may be managed under real-time rule engines for mission-critical applications. The transducer senses the physical quantity being measured and converts it into an electrical signal. The signal is fed to an A/D converter, and is ready for use by the processor to execute functional transformation based on the sensor characteristics stored in a Transducer Electronic Data Sheet (TEDS). Virtual sensor test instrumentation is built upon an open-system architecture with standardized protocol modules/stacks to interface with industry standards and commonly used software. One major benefit of deploying the virtual sensor test instrumentation is the ability, through a plug-and-play common interface, to convert raw sensor data in either analog or digital form to an IEEE 1451 standard-based smart sensor, which has instructions to program sensors for a wide variety of functions. The sensor data is processed in a distributed fashion across the network, providing a large pool of resources in real time to meet stringent latency requirements.
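
    The TEDS-driven conversion described above can be illustrated with a minimal analogue: raw ADC counts become engineering units using calibration constants stored with the sensor. The fields are simplified for illustration and do not follow the IEEE 1451 binary TEDS layout.

        # Simplified TEDS-style record and conversion; not the IEEE 1451 binary format.
        from dataclasses import dataclass

        @dataclass
        class Teds:
            manufacturer: str
            model: str
            units: str
            scale: float   # engineering units per ADC count
            offset: float  # engineering-unit offset at zero counts

        def convert(raw_counts: int, teds: Teds) -> float:
            """Apply the sensor's stored linear calibration to a raw sample."""
            return raw_counts * teds.scale + teds.offset

        thermocouple = Teds("Acme", "TC-7", "degC", scale=0.025, offset=-50.0)
        print(convert(2800, thermocouple), thermocouple.units)  # 20.0 degC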

  6. Wafer bonded virtual substrate and method for forming the same

    DOEpatents

    Atwater, Jr., Harry A.; Zahler, James M. [Pasadena, CA]; Morral, Anna Fontcuberta i [Paris, FR]

    2007-07-03

    A method of forming a virtual substrate comprised of an optoelectronic device substrate and handle substrate comprises the steps of initiating bonding of the device substrate to the handle substrate, improving or increasing the mechanical strength of the device and handle substrates, and thinning the device substrate to leave a single-crystal film on the virtual substrate such as by exfoliation of a device film from the device substrate. The handle substrate is typically Si or other inexpensive common substrate material, while the optoelectronic device substrate is formed of more expensive and specialized electro-optic material. Using the methodology of the invention a wide variety of thin film electro-optic materials of high quality can be bonded to inexpensive substrates which serve as the mechanical support for an optoelectronic device layer fabricated in the thin film electro-optic material.

  7. Wafer bonded virtual substrate and method for forming the same

    NASA Technical Reports Server (NTRS)

    Atwater, Jr., Harry A. (Inventor); Zahler, James M. (Inventor); Morral, Anna Fontcuberta i (Inventor)

    2007-01-01

    A method of forming a virtual substrate comprised of an optoelectronic device substrate and handle substrate comprises the steps of initiating bonding of the device substrate to the handle substrate, improving or increasing the mechanical strength of the device and handle substrates, and thinning the device substrate to leave a single-crystal film on the virtual substrate such as by exfoliation of a device film from the device substrate. The handle substrate is typically Si or other inexpensive common substrate material, while the optoelectronic device substrate is formed of more expensive and specialized electro-optic material. Using the methodology of the invention a wide variety of thin film electro-optic materials of high quality can be bonded to inexpensive substrates which serve as the mechanical support for an optoelectronic device layer fabricated in the thin film electro-optic material.

  8. Supporting the Loewenstein occupational therapy cognitive assessment using distributed user interfaces.

    PubMed

    Tesoriero, Ricardo; Gallud Lazaro, Jose A; Altalhi, Abdulrahman H

    2017-02-01

    Improve the quantity and quality of information obtained from traditional Loewenstein Occupational Therapy Cognitive Assessment Battery systems to monitor the evolution of patients' rehabilitation process as well as to compare different rehabilitation therapies. The system replaces traditional artefacts with virtual versions of them to take advantage of cutting edge interaction technology. The system is defined as a Distributed User Interface (DUI) supported by a display ecosystem, including mobile devices as well as multi-touch surfaces. Due to the heterogeneity of the devices involved in the system, the software technology is based on a client-server architecture using the Web as the software platform. The system provides therapists with information that is not available (or it is very difficult to gather) using traditional technologies (i.e. response time measurements, object tracking, information storage and retrieval facilities, etc.). The use of DUIs allows therapists to gather information that is unavailable using traditional assessment methods as well as adapt the system to patients' profile to increase the range of patients that are able to take this assessment. Implications for Rehabilitation Using a Distributed User Interface environment to carry out LOTCAs improves the quality of the information gathered during the rehabilitation assessment. This system captures physical data regarding patient's interaction during the assessment to improve the rehabilitation process analysis. Allows professionals to adapt the assessment procedure to create different versions according to patients' profile. Improves the availability of patients' profile information to therapists to adapt the assessment procedure.

  9. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    NASA Astrophysics Data System (ADS)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  10. Development of a Haptic Interface for Natural Orifice Translumenal Endoscopic Surgery Simulation

    PubMed Central

    Dargar, Saurabh; Sankaranarayanan, Ganesh

    2016-01-01

    Natural orifice translumenal endoscopic surgery (NOTES) is a minimally invasive procedure which utilizes the body’s natural orifices to gain access to the peritoneal cavity. The NOTES procedure is designed to minimize external scarring and patient trauma; however, flexible endoscopy based pure NOTES procedures require critical scope handling skills. The delicate nature of the NOTES procedure requires extensive training, thus to improve access to training while reducing risk to patients we have designed and developed the VTEST©, a virtual reality NOTES simulator. As part of the simulator, a novel decoupled 2-DOF haptic device was developed to provide realistic force feedback to the user in training. A series of experiments was performed to determine the behavioral characteristics of the device. The device was found capable of rendering up to 5.62 N of continuous force in the translational DOF and 0.190 N·m of continuous torque in the rotational DOF. The device possesses 18.1 Hz of force bandwidth in the translational DOF and 5.7 Hz in the rotational DOF. A feedforward friction compensator was also successfully implemented to minimize the negative impact of friction during interaction with the device. In this work we have presented the detailed development and evaluation of the haptic device for the VTEST©.
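
    The record reports a feedforward friction compensator but not its model, so the sketch below assumes the common Coulomb-plus-viscous form, with made-up coefficients and a small velocity dead band to avoid chatter near zero.

        # Assumed Coulomb-plus-viscous feedforward friction compensation.
        import math

        F_COULOMB = 0.35  # N, assumed Coulomb friction magnitude
        B_VISCOUS = 0.08  # N*s/m, assumed viscous coefficient
        V_EPS = 1e-3      # m/s, dead band to avoid chatter near zero velocity

        def friction_feedforward(velocity: float) -> float:
            """Force added to the command to cancel estimated device friction."""
            if abs(velocity) < V_EPS:
                return 0.0
            return math.copysign(F_COULOMB, velocity) + B_VISCOUS * velocity

        def commanded_force(desired_force: float, velocity: float) -> float:
            return desired_force + friction_feedforward(velocity)

        print(commanded_force(1.0, 0.05), commanded_force(1.0, -0.05))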

  11. Open-Source Assisted Laboratory Automation through Graphical User Interfaces and 3D Printers: Application to Equipment Hyphenation for Higher-Order Data Generation.

    PubMed

    Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C

    2017-10-17

    Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source code program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated both with physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of image recognition tools provided by Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, advantages derived from open-source tools assistance could be appreciated, mainly in terms of lesser intervention of operators and cost savings.
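
    DIOS drives instrument GUIs through Sikuli's image-recognition scripting, so a node's virtual inputs and outputs can be actuated by clicking on screenshots of its controls. The sketch below uses standard Sikuli script functions (click, wait, type), which run inside the Sikuli runtime; the screenshot file names and the acquisition workflow are hypothetical, not taken from the paper.

        # Sikuli-style script (Python/Jython syntax, runs in the Sikuli runtime).
        # Screenshot file names and workflow are hypothetical.
        def acquire_spectrum(sample_id):
            wait("spectrometer_main_window.png", 30)   # instrument GUI is up
            click("new_measurement_button.png")
            click("sample_id_field.png")
            type(sample_id)
            click("start_button.png")
            wait("acquisition_done_dialog.png", 600)   # block until the run finishes
            click("export_csv_button.png")

        for sample in ("S01", "S02", "S03"):
            acquire_spectrum(sample)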

  12. Illusion media: Generating virtual objects using realizable metamaterials

    NASA Astrophysics Data System (ADS)

    Jiang, Wei Xiang; Ma, Hui Feng; Cheng, Qiang; Cui, Tie Jun

    2010-03-01

    We propose a class of optical transformation media, illusion media, which render the enclosed object invisible and generate one or more virtual objects as desired. We apply the proposed media to design a microwave device, which transforms an actual object into two virtual objects. Such an illusion device exhibits unusual electromagnetic behavior as verified by full-wave simulations. Different from the published illusion devices which are composed of left-handed materials with simultaneously negative permittivity and permeability, the proposed illusion media have finite and positive permittivity and permeability. Hence the designed device could be realizable using artificial metamaterials.
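
    For context, media of this kind are designed with the standard transformation-optics rule (a textbook relation, not taken from this paper): a coordinate map x' = x'(x) with Jacobian Lambda turns the material parameters of the original space into those of the transformed medium.

        % Standard transformation-optics constitutive relation (textbook form).
        \[
          \varepsilon' = \frac{\Lambda\,\varepsilon\,\Lambda^{T}}{\det\Lambda},
          \qquad
          \mu' = \frac{\Lambda\,\mu\,\Lambda^{T}}{\det\Lambda},
          \qquad
          \Lambda_{ij} = \frac{\partial x'_{i}}{\partial x_{j}}.
        \]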

  13. Making Information Overload Work: The Dragon Software System on a Virtual Reality Responsive Workbench

    DTIC Science & Technology

    1998-03-01

    Research Laboratory’s Virtual Reality Responsive Workbench (VRRWB) and Dragon software system which together address the problem of battle space...and describe the lessons which have been learned. Interactive graphics, workbench, battle space visualization, virtual reality, user interface.

  14. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  15. A kickball game for ankle rehabilitation by JAVA, JNI, and VRML

    NASA Astrophysics Data System (ADS)

    Choi, Hyungjeen; Ryu, Jeha; Lee, Chansu

    2004-03-01

    This paper presents the development of a virtual environment that can be applied to the ankle rehabilitation procedure. We developed a virtual football stadium to engage the patient, in which a two-degree-of-freedom (DOF) plate-shaped object is oriented to kick a ball falling from the sky in accordance with the data from the ankle's dorsiflexion/plantarflexion and inversion/eversion motion on the moving platform of the K-Platform. This Kickball Game is implemented in the Virtual Reality Modeling Language (VRML). To control virtual objects, data from the K-Platform are transmitted through a communication module implemented in C++. Java, the Java Native Interface (JNI) and a VRML plug-in are combined together so as to interface the communication module with the virtual environment built in VRML. This game may be applied to the Active Range of Motion (AROM) exercise procedure, one of the ankle rehabilitation procedures.

  16. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-02

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.

  17. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-16

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.

  18. WE-G-BRA-04: The Development of a Virtual Reality Dosimetry Training Platform for Physics Training.

    PubMed

    Beavis, A; Ward, J

    2012-06-01

    Recently there has been a great deal of interest in the application of simulation methodologies for training. We have previously developed a Virtual Environment for Radiotherapy Training, VERT, which simulates a fully interactive and functional Linac. Patient and plan data can be accessed across a DICOM interface, allowing the treatment process to be simulated. Here we present a newly developed range of Physics equipment, which allows the user to undertake realistic QC processes. Five devices are available: 1) scanning water phantom, 2) 'solid water' QC block/ion chamber, 3) light/radiation field coincidence phantom, 4) laser alignment phantom and 5) water-based calibration phantom with reference-class and 'departmental' ion chambers. The devices were created to operate realistically and function as expected; each has an associated control screen which provides control and feedback information. The dosimetric devices respond appropriately to the beam qualities available on the Linac. Geometrical characteristics of the Linac, e.g. isocentre integrity, laser calibration and jaw calibrations, can have random errors introduced in order to enable the user to learn and observe fault conditions. In the calibration module appropriate factors for temperature and pressure must be set to correct for ambient, simulated, room conditions. The dosimetric devices can be used to characterise the Linac beams. Depth doses with Dmax of 15 mm/29 mm and d10 of 67%/77% respectively for 10 cm square 6/15 MV beams were measured. The Quality Indices (TPR20/10 ratios) can be measured as 0.668 and 0.761 respectively. At a simple level the tools can be used to demonstrate beam divergence or the effect of the inverse square law; they are also designed to be used to simulate the calibration of a new ion chamber. We have developed a novel set of tools that allow education in Physics processes via simulation training in our virtual environment. Both authors are Founders and Directors of Vertual Ltd, a spin-out company that exists to commercialise the results of the research work presented in this abstract.
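
    The temperature and pressure factors mentioned for the calibration module follow the standard air-density correction for vented ionization chambers, which is worth stating explicitly. The reference conditions below (20 degC, 101.325 kPa) are an assumption; protocols differ (TG-51, for example, uses 22 degC).

        # Standard air-density correction k_TP for a vented ionization chamber:
        #   k_TP = ((273.2 + T) / (273.2 + T_ref)) * (P_ref / P)
        # Reference conditions are assumed to be 20 degC and 101.325 kPa here.
        def k_tp(temp_c, pressure_kpa, t_ref=20.0, p_ref=101.325):
            """Air-density correction applied to the chamber reading."""
            return ((273.2 + temp_c) / (273.2 + t_ref)) * (p_ref / pressure_kpa)

        print(round(k_tp(20.0, 101.325), 4))  # 1.0 at reference conditions
        print(round(k_tp(24.0, 99.5), 4))     # warmer, lower pressure -> k_TP > 1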

  19. Active tactile exploration using a brain-machine-brain interface.

    PubMed

    O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L

    2011-10-05

    Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.

  20. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  1. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force-feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of Space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications including CAD development, procedure design and simulation of human-system interactions in a desktop-sized work volume.

  2. NASA Virtual Glovebox (VBX): Emerging Simulation Technology for Space Station Experiment Design, Development, Training and Troubleshooting

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard

    2003-01-01

    The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented; and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desk-top sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and diffusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space- based applications, can serve a dual use facilitating homeland security here on Earth.

  3. Virtual reality in the operating room of the future.

    PubMed

    Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R

    1997-01-01

    In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. Hence, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives of the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. Commonly known as Virtual Prototyping and long applied by the automotive industry, this project shows that the use of VR techniques can also prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This Augmented Reality could represent the environment for the surgeons of tomorrow.

  4. VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients

    NASA Astrophysics Data System (ADS)

    Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F.; Long, Daniel J.; Bolch, Wesley E.; Liu, Bob; Xu, X. George

    2015-07-01

    This paper describes the development and testing of VirtualDose—a software for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and pregnancy stages. Models of GE Lightspeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework is designed with the ‘software as a service (SaaS)’ delivery concept under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows a third-party picture archiving and communication system software package to seamlessly integrate with VirtualDose’s functions. Software testing showed that VirtualDose was compatible with numerous operating systems including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT—two dosimetry tools based on stylized pediatric and adult patient models known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism offered by stylized and voxel phantoms have caused significant discrepancies in CT dose estimations.
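    The 'software as a service' framing above implies a thin client: dose lookups happen server-side and results come back over HTTP. The sketch below shows what such a client call might look like; the endpoint path, parameter names, and response shape are hypothetical, since VirtualDose's actual REST API is not documented in this abstract.

```python
import requests

def fetch_organ_doses(base_url, scanner, kvp, mas, phantom):
    """Query a hypothetical SaaS dosimetry endpoint for per-organ doses."""
    params = {
        "scanner": scanner,   # e.g. one of the validated 16-slice models
        "kVp": kvp,
        "mAs": mas,
        "phantom": phantom,   # one of the 25 anatomically realistic phantoms
    }
    resp = requests.get(f"{base_url}/organ-doses", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()        # expected shape: {organ_name: dose_mGy, ...}

# doses = fetch_organ_doses("https://example.org/virtualdose-like-api",
#                           scanner="16-slice", kvp=120, mas=200,
#                           phantom="adult-male")
```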

  5. Towards real-time communication between in vivo neurophysiological data sources and simulator-based brain biomimetic models.

    PubMed

    Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab

    2014-11-01

    Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of the BMI machinery and therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These then must be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool to collect signals from microelectrodes implanted in a live subject, and a BMM, a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments, respecting real-time constraints.
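    The binning operation mentioned above is simple to state concretely: continuous spike timestamps are discretized into fixed-width counts before being fed to the BMM. A minimal sketch (bin width and data invented for illustration):

```python
import numpy as np

def bin_spikes(spike_times_s, t_start, t_stop, bin_ms=50):
    """Discretize spike timestamps into per-bin spike counts."""
    edges = np.arange(t_start, t_stop + 1e-9, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return counts

spikes = [0.012, 0.034, 0.051, 0.120, 0.122, 0.190]
print(bin_spikes(spikes, 0.0, 0.2))   # four 50 ms bins -> [2 1 2 1]
```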

  6. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
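    The core mechanism—an on-node TLB backed by a host-managed view of the address space—can be illustrated with a toy translation routine. This is a sketch of the general TLB/page-table idea only, not the patented design:

```python
PAGE = 4096   # bytes per page

class ToyNodeMMU:
    def __init__(self, host_page_table):
        self.page_table = host_page_table   # mapping the host controls
        self.tlb = {}                       # node-local translation cache

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE)
        if vpn not in self.tlb:                   # TLB miss
            self.tlb[vpn] = self.page_table[vpn]  # consult host-managed table
        return self.tlb[vpn] * PAGE + offset

mmu = ToyNodeMMU({0: 7, 1: 3})
print(hex(mmu.translate(0x1234)))   # virtual page 1 -> frame 3 -> 0x3234
```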

  7. Training simulator for retinal laser photocoagulation: a new approach for surgeons' apprenticeships

    NASA Astrophysics Data System (ADS)

    Dubois, Patrick; Meseure, Philippe; Peugnet, Frederic; Rouland, Jean-Francois

    1998-06-01

    Retinal laser photocoagulation is a common practice in the therapy of many eye diseases. Mastering it requires specific training, usually performed on actual patients with some risk. The authors present a new device intended to deliver complete training separate from therapeutic practice. This training simulator is built around the actual instrument to comply with the required realism. The instrumental functionalities of the device give residents the same operating conditions as in actual practice. The eye fundus visualization is simulated by virtual images based on actual fundus pictures. They are computed at a rate of 10-12 frames/second according to the adjustments and manipulations of the 3-mirror lens made by the operator. All the pictures are combined in a fundus database planned to collect a wide variety of pathologies. The pedagogical functionalities are gathered in the user's interface. The two major guidelines of the developed software were to achieve an easy-to-use interface and to avoid enforcing 'school-dependent' rules of evaluation. This new pedagogical instrument runs on PC microcomputers, which allows a low-cost technology, and could provide practical training in retinal photocoagulation without the patient. A clinical validation of its pedagogical efficiency is submitted in another abstract.

  8. Gesture Based Control and EMG Decomposition

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with electromyograms (EMG). First described is a neuroelectric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models, which are used to recognize the gestures as they are being performed in real time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups, we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data was obtained using a custom linear electrode array designed for this study.
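    The moving-average features that feed the hidden Markov models are easy to picture: rectify the raw EMG and smooth it over a short window, so bursts of muscle activity become slowly varying envelopes. A minimal sketch with synthetic data (window length and signal invented for illustration; the HMM stage is not shown):

```python
import numpy as np

def emg_moving_average(emg, window=64):
    """Rectify an EMG channel and smooth it with a moving average."""
    rectified = np.abs(emg - np.mean(emg))       # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

rng = np.random.default_rng(0)
# Quiet forearm for 0.5 s, then a gesture burst of higher amplitude.
raw = rng.normal(size=1000) * np.r_[0.2 * np.ones(500), np.ones(500)]
env = emg_moving_average(raw)
print(env[:500].mean() < env[500:].mean())       # True: burst stands out
```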

  9. Virtual Reality and Computer-Enhanced Training Devices Equally Improve Laparoscopic Surgical Skill in Novices

    PubMed Central

    Kanumuri, Prathima; Ganai, Sabha; Wohaibi, Eyad M.; Bush, Ronald W.; Grow, Daniel R.

    2008-01-01

    Background: The study aim was to compare the effectiveness of virtual reality and computer-enhanced videoscopic training devices for training novice surgeons in complex laparoscopic skills. Methods: Third-year medical students received instruction on laparoscopic intracorporeal suturing and knot tying and then underwent a pretraining assessment of the task using a live porcine model. Students were then randomized to objectives-based training on either the virtual reality (n=8) or computer-enhanced (n=8) training devices for 4 weeks, after which the assessment was repeated. Results: Posttraining performance had improved compared with pretraining performance in both task completion rate (94% versus 18%; P<0.001) and time [181±58 (SD) versus 292±24]. Performance of the 2 groups was comparable before and after training. Of the subjects, 88% thought that haptic cues were important in simulators. Both groups agreed that their respective training systems were effective teaching tools, but computer-enhanced device trainees were more likely to rate their training as representative of reality (P<0.01). Conclusions: Training on virtual reality and computer-enhanced devices had equivalent effects on skills improvement in novices. Despite the perception that haptic feedback is important in laparoscopic simulation training, its absence in the virtual reality device did not impede acquisition of skill. PMID:18765042

  10. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  11. Visualization of Vgi Data Through the New NASA Web World Wind Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Kilsedar, C. E.; Zamboni, G.

    2016-06-01

    GeoWeb 2.0, laying the foundations of Volunteered Geographic Information (VGI) systems, has led to platforms where users can contribute to geographic knowledge that is open to access. Moreover, as a result of advancements in 3D visualization, virtual globes able to visualize geographic data even in browsers have emerged. However, the integration of VGI systems and virtual globes has not been fully realized. The study presented aims to visualize volunteered data in 3D, considering also ease of use for the general public, using Free and Open Source Software (FOSS). The new Application Programming Interface (API) of NASA, Web World Wind, written in JavaScript and based on the Web Graphics Library (WebGL), is cross-platform and cross-browser, so a virtual globe created with this API is accessible through any WebGL-supported browser on different operating systems and devices, requiring no installation or configuration on the client side and thus making the collected data more usable. This is not the case with World Wind for Java, which requires installation and configuration of the Java Virtual Machine (JVM). Furthermore, the data collected through various VGI platforms might be in different formats, stored in a traditional relational database or in a NoSQL database. The project developed aims to visualize and query data collected through the Open Data Kit (ODK) platform and a cross-platform application, where the data are stored in a relational PostgreSQL database and a NoSQL CouchDB database, respectively.

  12. Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.

    PubMed

    Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W

    2015-01-01

    Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
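    Population coding, the scheme used above to feed muscle lengths back to the proprioceptive population, can be shown in a few lines: each neuron has a Gaussian tuning curve over muscle length, and the length is recovered as a rate-weighted average of preferred values. The tuning width and rates here are illustrative, not the paper's parameters:

```python
import numpy as np

def encode(length, preferred, sigma=0.05, r_max=100.0):
    """Firing rates of a population with Gaussian tuning curves."""
    return r_max * np.exp(-(length - preferred) ** 2 / (2 * sigma ** 2))

def decode(rates, preferred):
    """Recover the encoded value as a rate-weighted average."""
    return np.sum(rates * preferred) / np.sum(rates)

preferred = np.linspace(0.0, 1.0, 21)   # preferred normalized muscle lengths
rates = encode(0.63, preferred)
print(round(decode(rates, preferred), 3))   # ~0.63
```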

  13. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
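    Because piezoelectric film produces a signal proportional to the rate of bending rather than the bend itself, one simple way to recover a joint-angle change is to scale and integrate the sensor voltage. The sketch below illustrates that idea only; the authors' calibrated processing pipeline is not described in the abstract, and the gain here is invented:

```python
import numpy as np

def angle_change_deg(voltage, dt, gain_deg_per_vs):
    """Integrate a piezo voltage trace into an estimated angle change."""
    return gain_deg_per_vs * np.cumsum(voltage) * dt

dt = 0.001                          # 1 kHz sampling
t = np.arange(0.0, 1.0, dt)
v = np.where(t < 0.5, 1.0, 0.0)     # constant flexion rate for 0.5 s
print(angle_change_deg(v, dt, gain_deg_per_vs=90.0)[-1])   # ~45 degrees
```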

  14. Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    PubMed

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-01-01

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.

  15. Direct Manipulation in Virtual Reality

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.

  16. Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.

    PubMed

    Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny

    2016-11-01

    Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation, as the complexity of the anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering abilities of four different haptic hand controllers were analyzed and compared to find the best commercial haptic hand controller. A test rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both the time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback in different drilling conditions. This method can be used to assess the vibration rendering ability of VR simulation systems and in the selection of suitable haptic devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
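    The two fidelity measures named above are standard signal comparisons, sketched below on synthetic stand-ins: magnitude-squared coherence between the recorded drill signal and the signal picked up on the device's end-effector, plus a normalized cross-correlation peak (signals and sampling rate invented for illustration):

```python
import numpy as np
from scipy.signal import coherence

fs = 10_000
t = np.arange(0, 1, 1 / fs)
drill = np.sin(2 * np.pi * 180 * t)    # stand-in for the recorded vibration
rendered = 0.7 * drill + 0.1 * np.random.default_rng(1).normal(size=t.size)

f, Cxy = coherence(drill, rendered, fs=fs, nperseg=1024)
zm_d, zm_r = drill - drill.mean(), rendered - rendered.mean()
peak = np.correlate(zm_d, zm_r, "full").max() / (
    np.std(drill) * np.std(rendered) * t.size)

print(f"coherence near 180 Hz: {Cxy[np.argmin(np.abs(f - 180))]:.2f}")
print(f"normalized cross-correlation peak: {peak:.2f}")
```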

  17. Intelligent virtual reality in the setting of fuzzy sets

    NASA Technical Reports Server (NTRS)

    Dockery, John; Littman, David

    1992-01-01

    The authors have previously introduced the concept of virtual reality worlds governed by artificial intelligence. Creation of an intelligent virtual reality was further proposed as a universal interface for the handicapped. This paper extends consideration of intelligent virtual reality to a context in which fuzzy set principles are explored as a major tool for implementing theory in the domain of applications to the disabled.

  18. Bluetooth as a Playful Public Art Interface

    NASA Astrophysics Data System (ADS)

    Stukoff, Maria N.

    This chapter investigates how the application of emergent communication technologies assisted in the design of a playful art experience in a public place. Every Passing Moment (EPM) was a mobile public artwork that tracked and recorded any discoverable Bluetooth device to automatically seed a flower in a virtual garden projected onto an urban screen. EPM was the first public artwork to run blu_box, a custom-designed Bluetooth system for mobile telephony. The aim of blu_box was to build a system that supported playful interactions between the public and an urban screen, openly accessible to anyone with a Bluetooth-enabled mobile phone. This participatory engagement was observed in EPM on three levels, namely unconscious, conscious, and dynamic play. Furthermore, this chapter highlights how sound and face-to-face communication proved imperative to the play dynamics of EPM. In conclusion, this chapter proposes ways in which the use of emergent communication technologies in public places, especially when interfaced with urban screening platforms, can construct playful city spaces for the public at large.

  19. Skills based evaluation of alternative input methods to command a semi-autonomous electric wheelchair.

    PubMed

    Rojas, Mario; Ponce, Pedro; Molina, Arturo

    2016-08-01

    This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head-movement and speech-recognition controls, was designed to facilitate mobility for severely disabled people. Thirteen tasks, common to all wheelchair users, were attempted five times each, controlling the wheelchair with the virtual joystick and the hands-free interfaces in different areas for disabled and non-disabled people. Even though the prototype has an intelligent navigation control based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scores showed that two of the controls, head movements and the virtual joystick, have similar capabilities: 92.3% and 100%, respectively. However, the 54.6% capability score obtained for the speech control interface indicates the need for navigation assistance to accomplish some of the goals. Furthermore, the evaluation time indicates those skills that require more user training with the interface, and suggests specifications to improve the overall performance of the wheelchair.

  20. Training Surgical Residents With a Haptic Robotic Central Venous Catheterization Simulator.

    PubMed

    Pepley, David F; Gordon, Adam B; Yovanoff, Mary A; Mirkin, Katelin A; Miller, Scarlett R; Han, David C; Moore, Jason Z

    Ultrasound-guided central venous catheterization (CVC) is a common surgical procedure with complication rates ranging from 5 to 21 percent. Training is typically performed using manikins that do not simulate anatomical variations such as obesity and abnormal vessel positioning. The goal of this study was to develop and validate the effectiveness of a new virtual reality and force-haptic based simulation platform for CVC of the right internal jugular vein. A CVC simulation platform was developed using a haptic robotic arm, a 3D position tracker, and computer visualization. The haptic robotic arm simulated needle insertion force based on cadaver experiments. The 3D position tracker was used as a mock ultrasound device with realistic visualization on a computer screen. Upon completion of a practice simulation, performance feedback is given to the user through a graphical user interface, including scoring factors based on good CVC practice. The effectiveness of the system was evaluated by training 13 first-year surgical residents using the virtual reality haptic-based training system over a 3-month period. The participants' performance increased from 52% to 96% on the baseline training scenario, approaching the average score of an expert surgeon: 98%. This also resulted in improvement in positive CVC practices, including a 61% decrease in the distance between the final needle tip position and the vein center, a decrease in mean insertion attempts from 1.92 to 1.23, and a 12% increase in time spent aspirating the syringe throughout the procedure. A virtual reality haptic robotic simulator for CVC was successfully developed. Surgical residents training on the simulation improved to near-expert levels after three robotic training sessions. This suggests that this system could act as an effective training device for CVC. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
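    The abstract lists the ingredients of the feedback score (tip-to-vein distance, insertion attempts, aspiration time) without giving the formula. A purely hypothetical scoring function along those lines, to make the idea concrete (weights and thresholds invented):

```python
def cvc_score(tip_to_center_mm, attempts, aspiration_fraction):
    """Toy CVC practice score: closer tip, fewer attempts, more aspiration."""
    proximity = max(0.0, 1.0 - tip_to_center_mm / 10.0)  # 0 beyond 10 mm
    attempt_term = 1.0 / attempts                        # one attempt is best
    return round(100 * (0.5 * proximity
                        + 0.3 * attempt_term
                        + 0.2 * aspiration_fraction), 1)

print(cvc_score(tip_to_center_mm=1.5, attempts=1, aspiration_fraction=0.9))
```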

  1. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    PubMed

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
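    The optics behind gaze-contingent focus is simple diopter arithmetic: the lens power needed to place the virtual image at the fixation distance is the reciprocal of that distance in meters, shifted by the user's refractive correction. A hedged sketch of that relation only; the prototypes' actual control laws are more involved:

```python
def lens_power_diopters(fixation_m, user_rx_diopters=0.0):
    """Focus-tunable lens power for a given gaze fixation distance."""
    vergence = 1.0 / fixation_m           # diopters = 1 / distance (m)
    return vergence + user_rx_diopters    # e.g. -2.0 D for a myopic user

for d_m in (0.3, 1.0, 6.0):
    print(f"{d_m} m -> {lens_power_diopters(d_m, user_rx_diopters=-2.0):+.2f} D")
```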

  2. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    PubMed

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. Cameras are mounted in correspondence of the user's eyes and allow one to grab live images of the patient with the same point of view of the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
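    The trackerless alignment described above is, at its core, camera pose estimation from known landmarks. A minimal sketch with OpenCV's solvePnP, using invented landmark coordinates and an invented camera matrix (the paper's own machine-vision pipeline is not detailed in the abstract):

```python
import numpy as np
import cv2

# 3D landmarks on the patient (from the preoperative model), in metres.
object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                       [0.1, 0.1, 0.0], [0.0, 0.0, 0.1], [0.1, 0.1, 0.1]],
                      dtype=np.float32)
# Their detections in one head-mounted camera frame, in pixels.
image_pts = np.array([[320, 240], [400, 242], [318, 160],
                      [398, 162], [330, 228], [408, 150]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, tvec.ravel())   # pose used to register the virtual anatomy
```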

  3. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    NASA Astrophysics Data System (ADS)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  4. Lessons Learned during the Development and Operation of Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Ohishi, M.; Shirasaki, Y.; Komiya, Y.; Mizumoto, Y.; Yasuda, N.; Tanaka, M.

    2010-12-01

    In the last few years, several Virtual Observatory (VO) projects have moved from the research and development phase to the operations phase. The VO projects include AstroGrid (UK), the Virtual Astronomical Observatory (formerly the National Virtual Observatory, USA), EURO-VO (EU), the Japanese Virtual Observatory (Japan), and so on. This successful transition owes primarily to the concerted effort to develop standard interfaces among the VO projects of the world, conducted in the International Virtual Observatory Alliance (IVOA). The registry interface has been one of the most important keys to sharing observed data and catalog data among the VO projects and data centers (data providers). Data access protocols and/or languages (SIAP, SSAP, ADQL) and the common data format (VOTable) are other keys. Consequently, we are already able to find scientific papers published using the VOs. However, we encountered several issues during the implementation process:

    - At the initial stage of the registry implementation, some fraction of the registry metadata were set incorrectly or were missing. IVOA members found that validation tools would be needed to check compliance before making an interface public.
    - Some data centers and/or data providers may find it difficult to implement the various standardized interfaces (protocols) required to publish their data through the VO; a VO interface toolkit would make implementation much easier for data centers.
    - Current VO standardization has not discussed in depth the quality assurance of published data, or how indexes of data quality could be provided. Such measures would be quite helpful for users judging data quality, and the issue needs to be discussed not only within the IVOA but also with observatories and data providers.
    - Past and current development in the VO projects has been driven from the technology side. However, since the ultimate purpose of the VOs is to accelerate the extraction of astronomical insight from, e.g., huge amounts of data or multi-wavelength data, science-driven outreach (including schools to train astronomers) is needed.
    - Some data centers and data providers mentioned that they need to be credited. In the data-centric science era it is crucial to explicitly credit the observatories, data centers, and data providers.

    Some suggestions for addressing these issues are described.

  10. MIRAGE: The data acquisition, analysis, and display system

    NASA Technical Reports Server (NTRS)

    Rosser, Robert S.; Rahman, Hasan H.

    1993-01-01

    Developed for the NASA Johnson Space Center and Life Sciences Directorate by GE Government Services, the Microcomputer Integrated Real-time Acquisition Ground Equipment (MIRAGE) system is a portable ground support system for Spacelab life sciences experiments. The MIRAGE system can acquire digital or analog data. Digital data may be NRZ-formatted telemetry packets or packets from a network interface. Analog signals are digitized and stored in experiment packet format. Data packets from any acquisition source are archived to disk as they are received. Meta-parameters are generated from the data packet parameters by applying mathematical and logical operators. Parameters are displayed in text and graphical form or output to analog devices. Experiment data packets may be retransmitted through the network interface. Data stream definition, experiment parameter format, parameter displays, and other variables are configured using a spreadsheet database. A database can be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control. The MIRAGE system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability and ease of use make MIRAGE a cost-effective solution to many experimental data processing requirements.
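    Meta-parameters of the kind described—values derived from packet parameters with mathematical and logical operators—can be expressed as a table of expressions evaluated against each decoded packet. The parameter names and formulas below are invented for illustration:

```python
# One decoded experiment packet (field names hypothetical).
packet = {"hr_raw": 912, "hr_gain": 0.0732, "temp_c": 37.2}

# Meta-parameter definitions: name -> expression over packet parameters.
meta_defs = {
    "heart_rate_bpm": lambda p: p["hr_raw"] * p["hr_gain"],   # mathematical
    "fever_flag":     lambda p: p["temp_c"] > 38.0,           # logical
}

meta = {name: fn(packet) for name, fn in meta_defs.items()}
print(meta)   # {'heart_rate_bpm': 66.75..., 'fever_flag': False}
```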

  11. Mixed Reality with HoloLens: Where Virtual Reality Meets Augmented Reality in the Operating Room.

    PubMed

    Tepper, Oren M; Rudy, Hayeem L; Lefkowitz, Aaron; Weimer, Katie A; Marks, Shelby M; Stern, Carrie S; Garfein, Evan S

    2017-11-01

    Virtual reality and augmented reality devices have recently been described in the surgical literature. The authors have previously explored various iterations of these devices, and although they show promise, it has become clear that virtual reality and/or augmented reality devices alone do not adequately meet the demands of surgeons. The solution may lie in a hybrid technology known as mixed reality, which merges many virtual reality and augmented reality features. Microsoft's HoloLens, the first commercially available mixed reality device, provides surgeons intraoperative hands-free access to complex data, the real environment, and bidirectional communication. This report describes the use of HoloLens in the operating room to improve decision-making and surgical workflow. The pace of mixed reality-related technological development will undoubtedly be rapid in the coming years, and plastic surgeons are ideally suited to both lead and benefit from this advance.

  12. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation, and to provide fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To carry out the control task, different kinds of control laws were included in the monitoring and control interface: PID, MPC and a non-linear model-based control law. The MCI helps to keep the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of an outlet temperature sensor failure, the MCI displays an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts to be applied in a double-pipe heat exchanger pilot plant. The method aims to teach students through the observation and manipulation of the main variables of the process and through interaction with the MCI developed in LabVIEW. The MCI provides electronic engineering students with knowledge of heat exchanger behavior, since the interface includes a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462
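    A virtual sensor of the kind described is, in essence, an observer: a model run in parallel with the plant and corrected by the measured output, so that a trustworthy estimate survives a sensor failure. The scalar sketch below uses a generic first-order model and a Luenberger-style correction; the paper's high-gain observer design and plant parameters are not reproduced here:

```python
# Illustrative first-order heat-exchanger-like dynamics and observer gain.
a, b = -0.05, 0.05        # plant parameters (1/s)
L_gain, dt = 2.0, 0.1     # observer gain and integration step (s)

x_true, x_hat = 60.0, 40.0            # true vs estimated outlet temp (C)
for _ in range(200):
    u = 80.0                          # inlet temperature (input)
    y = x_true                        # measurement (replaced on failure)
    x_true += dt * (a * x_true + b * u)
    x_hat += dt * (a * x_hat + b * u + L_gain * (y - x_hat))

print(round(x_true, 2), round(x_hat, 2))   # estimate converges to the truth
```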

  13. VRUSE--a computerised diagnostic tool: for usability evaluation of virtual/synthetic environment systems.

    PubMed

    Kalawsky, R S

    1999-02-01

    A special questionnaire (VRUSE) has been designed to measure the usability of a VR system according to the attitudes and perceptions of its users. Important aspects of VR systems were carefully derived to produce key usability factors for the questionnaire. Unlike questionnaires designed for generic interfaces, VRUSE is specifically designed for evaluating virtual environments: it is a diagnostic tool providing a wealth of information about a user's view of the interface. VRUSE can be used to great effect with other evaluation techniques to pinpoint problematic areas of a VR interface. Other applications include benchmarking of competing VR systems.

  14. Simple force feedback for small virtual environments

    NASA Astrophysics Data System (ADS)

    Schiefele, Jens; Albert, Oliver; van Lier, Volker; Huschka, Carsten

    1998-08-01

    In today's civil flight training simulators, only the cockpit and all its interaction devices exist as physical mockups. All other elements, such as flight behavior, motion, sound, and the visual system, are virtual. As an extension of this approach, 'Virtual Flight Simulation' tries to replace the cockpit mockup with a 3D computer-generated image. The complete cockpit, including the exterior view, is displayed on a Head Mounted Display (HMD), a BOOM, or a Cave Automatic Virtual Environment (CAVE). In most applications a dataglove or virtual pointers are used as input devices. A basic problem of such a Virtual Cockpit (VC) simulation is missing force feedback. A pilot cannot touch and feel the buttons, knobs, dials, etc. that he tries to manipulate. As a result, it is very difficult to generate realistic inputs into VC systems. 'Seating bucks' are used in the automotive industry to overcome the problem of missing force feedback: only a seat, steering wheel, pedals, stick shift, and radio panel are physically available. All other geometry is virtual and therefore untouchable, but visible in the output device. As an extension of this concept, a seating buck for commercial transport aircraft cockpits was developed. The pilot seat, side stick, pedals, thrust levers, and flaps lever are physically available. All other panels are simulated by simple flat plastic panels, located in the same places as their real counterparts but lacking the real input devices. A pilot sees the entire photorealistic cockpit in an HMD as 3D geometry but can only touch the physical parts and plastic panels. In order to determine task performance with the developed seating buck, a test series was conducted. Users pressed buttons, adjusted dials, and turned knobs. In the first test, a completely virtual environment was used. The second setting had plastic panels replacing all input devices. Finally, as a cross-reference, the participants repeated the test with a complete physical mockup of the input devices. All panels and physical devices can easily be relocated to simulate a different type of cockpit; at most 30 minutes are needed for a complete adaptation. So far, an Airbus A340 and a generic cockpit are supported.

  15. DEC Ada interface to Screen Management Guidelines (SMG)

    NASA Technical Reports Server (NTRS)

    Laomanachareon, Somsak; Lekkos, Anthony A.

    1986-01-01

    DEC's Screen Management Guidelines (SMG) are the Run-Time Library procedures that perform terminal-independent screen management functions on a VT100-class terminal. These procedures assist users in designing, composing, and keeping track of complex images on a video screen. There are three fundamental elements in the screen management model: the pasteboard, the virtual display, and the virtual keyboard. The pasteboard is like a two-dimensional area on which a user places and manipulates screen displays. The virtual display is a rectangular part of the terminal screen to which a program writes data with procedure calls. The virtual keyboard is a logical structure for input operations associated with a physical keyboard. SMG can be called from all major VAX languages. In Ada, predefined language pragmas are used to interface with SMG. These features and elements of SMG are briefly discussed.

  16. The Virtual Solar Observatory and the Heliophysics Meta-Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.; Hourclé, J. A.; Bogart, R. S.; Tian, K.; Hill, F.; Suàrez-Sola, I.; Zarro, D. M.; Davey, A. R.; Martens, P. C.; Yoshimura, K.; Reardon, K. M.

    2006-12-01

    The Virtual Solar Observatory (VSO) has survived its infancy and provides metadata search and data identification for measurements from 45 instrument data sets held at 12 online archives, as well as flare and coronal mass ejection (CME) event lists. Like any toddler, the VSO is good at getting into anything and everything, and is now extending its grasp to more data sets, new missions, and new access methods using its application programming interface (API). We discuss and demonstrate recent changes, including developments for STEREO and SDO, and an IDL-callable interface for the VSO API. We urge the heliophysics community to help civilize this obstreperous youngster by providing input on ways to make the VSO even more useful for system science research in its role as part of the growing cluster of Heliophysics Virtual Observatories.

  17. Functional performance comparison between real and virtual tasks in older adults

    PubMed Central

    Bezerra, Ítalla Maria Pinheiro; Crocetta, Tânia Brusque; Massetti, Thais; da Silva, Talita Dias; Guarnieri, Regiani; Meira, Cassio de Miranda; Arab, Claudia; de Abreu, Luiz Carlos; de Araujo, Luciano Vieira; Monteiro, Carlos Bandeira de Mello

    2018-01-01

    Introduction: Ageing is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity, making chronic disease and the well-being of older adults new challenges for global public health. Objective: The purpose of this study was to evaluate whether a task practiced in a virtual environment could promote better performance and enable transfer to the same task in a real environment. Method: The study evaluated 65 older adults of both genders, aged 60 to 82 years (M = 69.6, SD = 6.3). A coincident timing task was applied to measure the perceptual-motor ability to perform a motor response. The participants were divided into 2 groups: one started with a real interface and the other with a virtual interface. Results: All subjects improved their performance during practice, although improvement was not observed for the real interface, as the participants were near maximum performance from the beginning of the task. However, there was no transfer of performance from the virtual to the real environment or vice versa. Conclusions: The virtual environment was shown to provide improvement of performance with a short-term motor learning protocol in a coincident timing task. This result suggests that the practice of tasks in a virtual environment is a promising tool for the assessment and training of healthy older adults, even though there was no transfer of performance to the real environment. Trial registration: ISRCTN02960165. Registered 8 November 2016. PMID:29369177

  18. Virtual interface environment

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1986-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  19. Brain-computer interface: changes in performance using virtual reality techniques.

    PubMed

    Ron-Angevin, Ricardo; Díaz-Estrella, Antonio

    2009-01-09

    The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of the use of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. A group of subjects was trained using a BCI system, which uses conventional feedback (bar extension), and another group was trained using a BCI system, which submits subjects to a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between both interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve the feedback control, specifically for untrained subjects.

  20. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Konz, Daniel W. (Inventor); Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Winkelmann, Joseph P. (Inventor)

    2006-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. In one embodiment, the bus controller transmits messages to the network device interface containing a plurality of bits having a value defined by a transition between first and second states in the bits. The network device interface determines timing of the data sequence of the message and uses the determined timing to communicate with the bus controller.
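    Bits "having a value defined by a transition between first and second states" describe Manchester-style coding, where each bit period contains a mid-bit transition whose direction carries the value. A minimal decoder sketch (the convention assumed here, low-to-high is 1 and high-to-low is 0, and the two-samples-per-bit framing are illustrative, not taken from the patent):

```python
def manchester_decode(samples):
    """Decode two-samples-per-bit Manchester data into a bit list."""
    bits = []
    for first, second in zip(samples[::2], samples[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)
        elif (first, second) == (1, 0):
            bits.append(0)
        else:
            raise ValueError("no mid-bit transition: timing/framing error")
    return bits

print(manchester_decode([0, 1, 1, 0, 1, 0, 0, 1]))   # -> [1, 0, 0, 1]
```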

  21. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Konz, Daniel W. (Inventor); Winkelmann, Joseph P. (Inventor)

    2005-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted by the network device interface into digital signals and transmitted back to the controller. In one advantageous embodiment, the network device interface uses a specialized protocol for communicating across the network bus that uses a low-level instruction set and has low overhead for data communication.

  22. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Philip J. (Inventor); Winkelmann, Joseph P. (Inventor); Grant, Robert L. (Inventor); Konz, Daniel W. (Inventor)

    2006-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted by the network device interface into digital signals and transmitted back to the controller. In one advantageous embodiment, the network device interface is a state machine, such as an ASIC, that operates independent of a processor in communicating with the bus controller and data channels.

  23. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Philip J. (Inventor); Konz, Daniel W. (Inventor); Winkelmann, Joseph P. (Inventor); Grant, Robert L. (Inventor)

    2004-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted by the network device interface into digital signals and transmitted back to the controller. In one advantageous embodiment, the network device interface uses a specialized protocol for communicating across the network bus that uses a low-level instruction set and has low overhead for data communication.

  24. Reprint of: Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-06-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.

  25. Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-04-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.
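    For readers who want to try the RegTAP interface discussed above, the sketch below issues a small ADQL query against a public RegTAP endpoint using the pyvo library. The table and column names (rr.resource, ivoid, res_title) come from the RegTAP standard; the endpoint URL is an assumption and service availability may vary:

```python
from pyvo.dal import TAPService

# GAVO's RegTAP service (assumed reachable); any RegTAP endpoint works.
regtap = TAPService("http://reg.g-vo.org/tap")
result = regtap.search(
    "SELECT TOP 5 ivoid, res_title FROM rr.resource "
    "WHERE res_title LIKE '%pulsar%'")
for row in result:
    print(row["ivoid"], "-", row["res_title"])
```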

  1. Virtual reality for mobility devices: training applications and clinical results: a review.

    PubMed

    Erren-Wolters, Catelijne Victorien; van Dijk, Henk; de Kort, Alexander C; Ijzerman, Maarten J; Jannink, Michiel J

    2007-06-01

    Virtual reality is an emerging technology that may address the problems encountered in training (elderly) people to handle a mobility device. The objective of this review was to study different virtual reality training applications as well as their clinical implications for patients with mobility problems. Computerized literature searches were performed using the MEDLINE, Cochrane, CIRRIE and REHABDATA databases. This resulted in eight peer-reviewed journal articles. The included studies could be divided into three categories on the basis of their study objective. Five studies were related to training driving skills, two to physical exercise training and one to leisure activity. This review suggests that virtual reality is a potentially useful means of improving the use of a mobility device: for training driving skills, for maintaining physical condition, and as a leisure activity. Although this field of research appears to be in its early stages, the included studies pointed out a promising transfer of training in a virtual environment to the real-life use of mobility devices.

  2. Evaluation of a conceptual framework for predicting navigation performance in virtual reality.

    PubMed

    Grübel, Jascha; Thrash, Tyler; Hölscher, Christoph; Schinazi, Victor R

    2017-01-01

    Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.

  3. Evaluation of a conceptual framework for predicting navigation performance in virtual reality

    PubMed Central

    Thrash, Tyler; Hölscher, Christoph; Schinazi, Victor R.

    2017-01-01

    Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition. PMID:28915266

  4. Mobile devices, Virtual Reality, Augmented Reality, and Digital Geoscience Education.

    NASA Astrophysics Data System (ADS)

    Crompton, H.; De Paor, D. G.; Whitmeyer, S. J.; Bentley, C.

    2016-12-01

    Mobile devices are playing an increasing role in geoscience education. Affordances include instructor-student communication and class management in large classrooms, virtual and augmented reality applications, digital mapping, and crowd-sourcing. Mobile technologies have spawned the subfield of mobile learning, or m-learning, defined as learning across multiple contexts through social and content interactions. Geoscientists have traditionally engaged in non-digital mobile learning via fieldwork, but digital devices are greatly extending the possibilities, especially for non-traditional students. Smartphones and tablets are the most common devices, but smart glasses such as Pivothead enable live streaming of a first-person view (see for example, https://youtu.be/gWrDaYP5w58). Virtual reality headsets such as Google Cardboard create an immersive virtual field experience, and digital imagery such as GigaPan and Structure from Motion enables instructors and/or students to create virtual specimens and outcrops that are sharable across the globe. Whereas virtual reality (VR) replaces the real world with a virtual representation, augmented reality (AR) overlays digital data on the live scene visible to the user in real time. We have previously reported on our use of the AR application called FreshAiR for geoscientific "egg hunts." The popularity of Pokémon Go demonstrates the potential of AR for mobile learning in the geosciences.

  5. Toward a Virtual Solar Observatory: Starting Before the Petabytes Fall

    NASA Technical Reports Server (NTRS)

    Gurman, J. B.; Fisher, Richard R. (Technical Monitor)

    2002-01-01

    NASA is currently engaged in the study phase of a modest effort to establish a Virtual Solar Observatory (VSO). The VSO would serve ground- and space-based solar physics data sets from a distributed network of archives through a small number of interfaces to the scientific community. The basis of this approach, as of all planned virtual observatories, is the translation of metadata from the various sources via source-specific dictionaries so the user will not have to distinguish among keyword usages. A single Web interface should give access to all the distributed data. We present the current status of the VSO, its initial scope, and its relation to the European EGSO effort.

  6. System and method for interfacing large-area electronics with integrated circuit devices

    DOEpatents

    Verma, Naveen; Glisic, Branko; Sturm, James; Wagner, Sigurd

    2016-07-12

    A system and method for interfacing large-area electronics with integrated circuit devices is provided. The system may be implemented in an electronic device including a large-area electronics (LAE) device disposed on a substrate. An integrated circuit (IC) is disposed on the substrate. A non-contact interface is disposed on the substrate and coupled between the LAE device and the IC. The non-contact interface is configured to provide at least one of a data acquisition path or control path between the LAE device and the IC.

  7. Intelligent subsystem interface for modular hardware system

    NASA Technical Reports Server (NTRS)

    Caffrey, Robert T. (Inventor); Krening, Douglas N. (Inventor); Lannan, Gregory B. (Inventor); Schneiderwind, Michael J. (Inventor); Schneiderwind, Robert A. (Inventor)

    2000-01-01

    A single-chip application-specific integrated circuit (ASIC) which provides a flexible, modular interface between a subsystem and a standard system bus. The ASIC includes a microcontroller/microprocessor, a serial interface for connection to the bus, and a variety of communications interface devices available for coupling to the subsystem. A three-bus architecture, utilizing arbitration, provides connectivity within the ASIC and between the ASIC and the subsystem. The communication interface devices include UART (serial), parallel, and analog interfaces, plus an external device interface utilizing bus connections paired with device-select signals. A low-power (sleep) mode is provided, as is a processor disable option.

  8. A Survey of Middleware for Sensor and Network Virtualization

    PubMed Central

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.

    2014-01-01

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. Middleware for WSN virtualization can be further categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. This paper provides an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper presents a proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737

  9. A survey of middleware for sensor and network virtualization.

    PubMed

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd

    2014-12-12

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. Middleware for WSN virtualization can be further categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. This paper provides an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper presents a proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization.

  10. Open access for ALICE analysis based on virtualization technology

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Gheata, M.; Schutz, Y.

    2015-12-01

    Open access is an important lever for long-term data preservation in a HEP experiment. To guarantee the usability of data analysis tools beyond the experiment's lifetime, it is crucial that third-party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualization technology to hide the complexity and details of the experiment-specific software. Users can perform basic analysis tasks within CernVM, a lightweight generic virtual machine, paired with an ALICE-specific contextualization. Once the virtual machine is launched, a graphical user interface is automatically started without any additional configuration. This interface allows downloading the base ALICE analysis software and running a set of ALICE analysis modules. Currently the available tools include fully documented tutorials for ALICE analysis, such as the measurement of strange particle production or the nuclear modification factor in Pb-Pb collisions. The interface can be easily extended to include an arbitrary number of additional analysis modules. We present the current status of the tools used by ALICE through the CERN open access portal, and the plans for future extensions of this system.

  11. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, measuring the interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors in the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors performs similarly to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can replace precise but typically high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
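
    As a rough illustration of the virtual-sensor idea (not the UHP's actual model), the sketch below estimates interaction force from sampled positions alone, by differentiating the position signal and pushing it through an assumed mass-spring-damper model; all parameter values are placeholders.

      # Generic virtual force sensor: infer interaction force at a contact
      # point from position samples plus an assumed dynamic model. The
      # mass-spring-damper parameters are placeholders, not the UHP model.
      M, B, K = 0.8, 2.5, 40.0   # inertia [kg], damping, stiffness (assumed)
      DT = 0.001                 # sampling period of the position sensor [s]

      def estimate_force(x_prev2, x_prev, x_now, x_rest=0.0):
          """Finite-difference estimate of F = M*a + B*v + K*(x - x_rest)."""
          v = (x_now - x_prev) / DT                      # velocity estimate
          a = (x_now - 2 * x_prev + x_prev2) / DT ** 2   # acceleration estimate
          return M * a + B * v + K * (x_now - x_rest)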

  12. Cross-species 3D virtual reality toolbox for visual and cognitive experiments.

    PubMed

    Doucet, Guillaume; Gulli, Roberto A; Martinez-Trujillo, Julio C

    2016-06-15

    Although simplified visual stimuli, such as dots or gratings presented on homogeneous backgrounds, provide strict control over stimulus parameters during visual experiments, they fail to approximate visual stimulation in natural conditions. Adoption of virtual reality (VR) in neuroscience research has been proposed to circumvent this problem by combining strict control of experimental variables and behavioral monitoring within complex and realistic environments. We have created a VR toolbox that maximizes experimental flexibility while minimizing implementation costs. A free VR engine (Unreal 3) has been customized to interface with any control software via text commands, allowing seamless introduction into pre-existing laboratory data acquisition frameworks. Furthermore, control functions are provided for the two most common programming languages used in visual neuroscience: Matlab and Python. The toolbox offers the millisecond time resolution necessary for electrophysiological recordings and is flexible enough to support cross-species usage across a wide range of paradigms. Unlike previously proposed VR solutions whose implementation is complex and time-consuming, our toolbox requires minimal customization or technical expertise to interface with pre-existing data acquisition frameworks, as it relies on already familiar programming environments. Moreover, as it is compatible with a variety of display and input devices, identical VR testing paradigms can be used across species, from rodents to humans. This toolbox facilitates the addition of VR capabilities to any laboratory without perturbing pre-existing data acquisition frameworks or requiring any major hardware changes.
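
    The abstract does not publish the toolbox's command vocabulary, so the following sketch only illustrates the general pattern of driving a VR engine via text commands from Python; the port number and command strings are invented for the example.

      # Send plain-text commands to a VR engine over a socket, the style of
      # coupling the toolbox describes. The port and the command vocabulary
      # are hypothetical; only the text-command pattern comes from the paper.
      import socket

      with socket.create_connection(("localhost", 9000)) as sock:
          for command in ("SPAWN_OBJECT banana 120 40 0",
                          "MOVE_CAMERA 0 0 90",
                          "START_TRIAL 17"):
              sock.sendall((command + "\n").encode("ascii"))
              reply = sock.recv(1024).decode("ascii").strip()  # engine acknowledgement
              print(command, "->", reply)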

  13. Interfacing laboratory instruments to multiuser, virtual memory computers

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Stang, David B.; Roth, Don J.

    1989-01-01

    Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.

  14. Integrating UniTree with the data migration API

    NASA Technical Reports Server (NTRS)

    Schrodel, David G.

    1994-01-01

    The Data Migration Application Programming Interface (DMAPI) has the potential to allow developers of open systems Hierarchical Storage Management (HSM) products to virtualize native file systems without the requirement to make changes to the underlying operating system. This paper describes advantages of virtualizing native file systems in hierarchical storage management systems, the DMAPI at a high level, what the goals are for the interface, and the integration of the Convex UniTree+HSM with DMAPI along with some of the benefits derived in the resulting product.

  15. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    This paper describes the user interface (hardware and software), the design space, and preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Subject terms: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  16. Interactive Molecular Graphics for Augmented Reality Using HoloLens.

    PubMed

    Müller, Christoph; Krone, Michael; Huber, Markus; Biener, Verena; Herr, Dominik; Koch, Steffen; Reina, Guido; Weiskopf, Daniel; Ertl, Thomas

    2018-06-13

    Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is the rendering speed. HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling and weight. Consequently, insights from best practices for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability for large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.

  17. Tangible display systems: direct interfaces for computer-based studies of surface appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.; Ferwerda, James A.

    2010-02-01

    When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
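
    A minimal sketch of the orientation half of such a system: recovering the display's pitch and roll from the gravity vector reported by a 3-axis accelerometer, which the rendering module can then use to update surface reflections. The axis conventions are an assumption of the example.

      # Estimate display pitch and roll from a 3-axis accelerometer's
      # gravity vector, the kind of orientation input the tangible display
      # feeds to its renderer. Axis conventions here are assumed.
      import math

      def pitch_roll(ax, ay, az):
          """Return (pitch, roll) in degrees from accelerometer readings in g."""
          pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
          roll = math.degrees(math.atan2(ay, az))
          return pitch, roll

      print(pitch_roll(0.0, 0.0, 1.0))   # flat on the table -> (0.0, 0.0)
      print(pitch_roll(0.0, 0.7, 0.7))   # rolled about 45 degrees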

  18. Cyber entertainment system using an immersive networked virtual environment

    NASA Astrophysics Data System (ADS)

    Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi

    2002-05-01

    The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence, owing to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces the system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, allowing two players to experience competition. An avatar of each player is displayed on the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors and are able to play golf without encumbrance.

  19. Virtual Reality: You Are There

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Telepresence, or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM, the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.

  20. Towards Wearable A-Mode Ultrasound Sensing for Real-Time Finger Motion Recognition.

    PubMed

    Yang, Xingchen; Sun, Xueli; Zhou, Dalin; Li, Yuefeng; Liu, Honghai

    2018-06-01

    It is evident that surface electromyography (sEMG) based human-machine interfaces (HMIs) have inherent difficulty in predicting dexterous musculoskeletal movements such as finger motions. This paper investigates a plausible alternative to sEMG, an ultrasound-driven HMI, for dexterous motion recognition, owing to its ability to detect morphological changes of deep muscles and tendons. A multi-channel A-mode ultrasound lightweight device is adopted to evaluate the performance of finger motion recognition; an experiment is designed for both widely accepted offline and online algorithms, with eight able-bodied subjects employed. The experimental results show that the offline recognition accuracy is up to 98.83% ± 0.79%. The real-time motion completion rate is 95.4% ± 8.7%, and the online motion selection time is 0.243 ± 0.127 s. The outcomes confirm the feasibility of A-mode ultrasound based wearable HMIs and their promising applications in prosthetic devices, virtual reality, and remote manipulation.

  1. Revealing the planar chemistry of two-dimensional heterostructures at the atomic level.

    PubMed

    Chou, Harry; Ismach, Ariel; Ghosh, Rudresh; Ruoff, Rodney S; Dolocan, Andrei

    2015-06-23

    Two-dimensional (2D) atomic crystals and their heterostructures are an intense area of study owing to their unique properties that result from structural planar confinement. Intrinsically, the performance of a planar vertical device is linked to the quality of its 2D components and their interfaces, therefore requiring characterization tools that can reveal both its planar chemistry and morphology. Here, we propose a characterization methodology combining (micro-) Raman spectroscopy, atomic force microscopy and time-of-flight secondary ion mass spectrometry to provide structural information, morphology and planar chemical composition at virtually the atomic level, aimed specifically at studying 2D vertical heterostructures. As an example system, a graphene-on-h-BN heterostructure is analysed to reveal, with an unprecedented level of detail, the subtle chemistry and interactions within its layer structure that can be assigned to specific fabrication steps. Such detailed chemical information is of crucial importance for the complete integration of 2D heterostructures into functional devices.

  2. Virtual reality and brain computer interface in neurorehabilitation

    PubMed Central

    Dahdah, Marie; Driver, Simon; Parsons, Thomas D.; Richter, Kathleen M.

    2016-01-01

    The potential benefit of technology to enhance recovery after central nervous system injuries is an area of increasing interest and exploration. The primary emphasis to date has been motor recovery/augmentation and communication. This paper introduces two original studies to demonstrate how advanced technology may be integrated into subacute rehabilitation. The first study addresses the feasibility of brain computer interface with patients on an inpatient spinal cord injury unit. The second study explores the validity of two virtual environments with acquired brain injury as part of an intensive outpatient neurorehabilitation program. These preliminary studies support the feasibility of advanced technologies in the subacute stage of neurorehabilitation. These modalities were well tolerated by participants and could be incorporated into patients' inpatient and outpatient rehabilitation regimens without schedule disruptions. This paper expands the limited literature base regarding the use of advanced technologies in the early stages of recovery for neurorehabilitation populations and speaks favorably to the potential integration of brain computer interface and virtual reality technologies as part of a multidisciplinary treatment program. PMID:27034541

  3. ReportTutor – An Intelligent Tutoring System that Uses a Natural Language Interface

    PubMed Central

    Crowley, Rebecca S.; Tseytlin, Eugene; Jukic, Drazen

    2005-01-01

    ReportTutor is an extension to our work on Intelligent Tutoring Systems for visual diagnosis. ReportTutor combines a virtual microscope and a natural language interface to allow students to visually inspect a virtual slide as they type a diagnostic report on the case. The system monitors both actions in the virtual microscope interface as well as text created by the student in the reporting interface. It provides feedback about the correctness, completeness, and style of the report. ReportTutor uses MMTx with a custom data-source created with the NCI Metathesaurus. A separate ontology of cancer specific concepts is used to structure the domain knowledge needed for evaluation of the student’s input including co-reference resolution. As part of the early evaluation of the system, we collected data from 4 pathology residents who typed in their reports without the tutoring aspects of the system, and compared responses to an expert dermatopathologist. We analyzed the resulting reports to (1) identify the error rates and distribution among student reports, (2) determine the performance of the system in identifying features within student reports, and (3) measure the accuracy of the system in distinguishing between correct and incorrect report elements. PMID:16779024

  4. Lessons learned from an Ada conversion project

    NASA Technical Reports Server (NTRS)

    Porter, Tim

    1988-01-01

    Background; SAVVAS architecture; software portability; history of Ada; isolation of non-portable code; simple terminal interface package; constraints of language features; and virtual interfaces are outlined. This presentation is represented by viewgraphs only.

  5. 3D multiplayer virtual pets game using Google Card Board

    NASA Astrophysics Data System (ADS)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows the user to interact with a virtual environment generated and simulated by computer, giving the user the sensation of being in that environment. VR presents the virtual environment directly to the user rather than on a conventional screen, but it requires an additional device to show the view, known as a head-mounted display (HMD). The Oculus Rift and Microsoft HoloLens are among the most famous HMD devices used in VR. In 2014, Google Cardboard was introduced at the Google I/O developers conference; it is a VR platform which allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Cardboard to develop a simulation game of raising a pet. Google Cardboard is used to create the view of the VR environment, the view and control in the VR environment are built using the Unity game engine, and the simulation process is designed using a Finite State Machine (FSM). The FSM helps to design the process clearly, so that it describes the simulation of raising a pet well. Raising a pet is a fun activity, but many conditions can make it difficult, e.g., environmental conditions, disease, and high cost. This research aims to explore and implement Google Cardboard in a simulation of raising a pet.
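
    As a hedged illustration of the FSM-based design the authors describe (with invented states and events, not the game's actual ones), a pet simulation can be reduced to a transition table:

      # Sketch of a finite state machine for a virtual pet, in the spirit
      # of the FSM the authors use. States, events, and transitions are
      # invented for illustration.
      TRANSITIONS = {
          ("IDLE", "time_passes"): "HUNGRY",
          ("HUNGRY", "fed"): "IDLE",
          ("HUNGRY", "neglected"): "SICK",
          ("SICK", "treated"): "IDLE",
          ("IDLE", "played_with"): "HAPPY",
          ("HAPPY", "time_passes"): "IDLE",
      }

      def step(state: str, event: str) -> str:
          """Advance the pet FSM; unknown events leave the state unchanged."""
          return TRANSITIONS.get((state, event), state)

      state = "IDLE"
      for event in ("time_passes", "neglected", "treated", "played_with"):
          state = step(state, event)
          print(event, "->", state)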

  6. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Konz, Daniel W. (Inventor); Winkelmann, Joseph P. (Inventor); Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor)

    2007-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is converted into digital signals and transmitted to the controller. In some embodiments, network device interfaces associated with different data channels coordinate communications with the other interfaces based on either a transition in a command message sent by the bus controller or a synchronous clock signal.

  7. Development of a simulated smart pump interface.

    PubMed

    Elias, Beth L; Moss, Jacqueline A; Shih, Alan; Dillavou, Marcus

    2014-01-01

    Medical device user interfaces are increasingly complex, resulting in a need for evaluation in clinically accurate settings. Simulation of these interfaces can allow for evaluation, training, and use for research without the risk of harming patients and with a significant cost reduction over using the actual medical devices. This pilot project was phase 1 of a study to define and evaluate a methodology for development of simulated medical device interface technology to be used for education, device development, and research. Digital video and audio recordings of interface interactions were analyzed to develop a model of a smart intravenous medication infusion pump user interface. This model was used to program a high-fidelity simulated smart intravenous medication infusion pump user interface on an inexpensive netbook platform.

  8. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in CAVE-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, when using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested, in two experimental studies, different locomotion interfaces. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a Flystick®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to impact spatial learning negatively. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower compared to experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  9. Secure data exchange between intelligent devices and computing centers

    NASA Astrophysics Data System (ADS)

    Naqvi, Syed; Riguidel, Michel

    2005-03-01

    The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has ostensibly raised the stakes for the conception of computing-intensive environments using intelligent devices as their interface with the external world. These smart devices are used as data gateways for the computing units. These devices are employed in highly volatile environments where the secure exchange of data between these devices and their computing centers is of paramount importance. Moreover, their mission-critical applications require dependable measures against attacks like denial of service (DoS), eavesdropping, masquerading, etc. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called a 'computational grid'. The notion of infosphere is used to define a digital space made up of a persistent and a volatile asset in an often indefinite geographical space. We study different infospheres and present general evolutions and issues in the security of such technology-rich and intelligent environments. It is beyond any doubt that these environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before. It would be better to build in the ability to deal with these systems uniformly. As a solution, we propose a concept of virtualization of security services. We try to solve the difficult problems of implementation and maintenance of trust on the one hand, and those of security management in heterogeneous infrastructure on the other hand.

  10. The NASA Augmented/Virtual Reality Lab: The State of the Art at KSC

    NASA Technical Reports Server (NTRS)

    Little, William

    2017-01-01

    The NASA Augmented Virtual Reality (AVR) Lab at Kennedy Space Center is dedicated to the investigation of Augmented Reality (AR) and Virtual Reality (VR) technologies, with the goal of determining potential uses of these technologies as human-computer interaction (HCI) devices in an aerospace engineering context. Begun in 2012, the AVR Lab has concentrated on commercially available AR and VR devices that are gaining in popularity and use in a number of fields such as gaming, training, and telepresence. We are working with such devices as the Microsoft Kinect, the Oculus Rift, the Leap Motion, the HTC Vive, motion capture systems, and the Microsoft Hololens. The focus of our work has been on human interaction with the virtual environment, which in turn acts as a communications bridge to remote physical devices and environments which the operator cannot or should not control or experience directly. Particularly in reference to dealing with spacecraft and the oftentimes hazardous environments they inhabit, it is our hope that AR and VR technologies can be utilized to increase human safety and mission success by physically removing humans from those hazardous environments while virtually putting them right in the middle of those environments.

  11. Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm

    PubMed Central

    Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.

    2015-01-01

    Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics. PMID:26635598
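
    A schematic of the closed loop described above, with placeholder dynamics standing in for both the spiking network and the musculoskeletal model: firing rates drive muscle excitations, the arm model updates, and muscle lengths are fed back as proprioception.

      # Schematic closed loop: motoneuron rates -> muscle excitations ->
      # arm state -> proprioceptive feedback. All dynamics below are
      # placeholders, not the paper's cortical or musculoskeletal models.
      import random

      def motoneuron_rates(muscle_lengths):
          # Stand-in for the spiking model's motoneuron population output.
          return [max(0.0, random.gauss(20.0, 5.0) * (1.0 - l))
                  for l in muscle_lengths]

      def arm_step(muscle_lengths, excitations, dt=0.01, rest=0.6):
          # Toy first-order muscle dynamics: excitation contracts the muscle,
          # elasticity pulls it back toward its rest length.
          return [l + dt * ((rest - l) - 0.3 * e)
                  for l, e in zip(muscle_lengths, excitations)]

      lengths = [0.5, 0.5]                      # two antagonist muscles
      for _ in range(1000):
          rates = motoneuron_rates(lengths)     # proprioception feeds the network
          excitations = [r / 50.0 for r in rates]
          lengths = arm_step(lengths, excitations)
      print(lengths)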

  12. Naval Applications of Virtual Reality,

    DTIC Science & Technology

    1993-01-01

    By Mark Gembicki and David Rousseau. Published in Virtual Reality Special Report, pp. 67-72. Subject terms: man-machine interface, virtual reality, decision support.

  13. The Comparison Of Dome And HMD Delivery Systems: A Case Study

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deborah L.; Loftin, R. Bowen; Taylor, Laura C.; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess what devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a Virtual Environment Testbed (VET), for the comparison of Dome and Head Mounted Display (HMD) systems on an SGI Onyx workstation. By writing codelets, we allow a variety of virtual scenarios and subjects' information to be loaded without programming or changing the code. This is part of an ongoing research project conducted by NASA/JSC.

  14. A Human Machine Interface for EVA

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioceptive impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels, and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6-degree-of-freedom (dof) pointing device and a heads-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can also respond with speech in order to confirm selections, provide status and feedback, and present text output. A candidate 6-dof pointing device is Measurand's ShapeTape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the ShapeTape. It can be used to provide any kind of 3D geometric information, including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field, including robot joint torques, end effector configuration, procedure checklists and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA, including robot teleoperation, procedure checklist usage, operation of virtual control panels, and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, ShapeTape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., moved, scaled, annotated, tasks marked as done, prerequisite tasks consulted) by spoken command.
    Virtual control panels for suit equipment, equipment being repaired, or arbitrary equipment on the space station can be displayed on the HUD and operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques. Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Under many circumstances, however, reducing their dependence on such personnel may improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.
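
    As a toy illustration of the direct-manipulation mapping described above (the transform values are invented), hand positions measured in the suit frame can be mapped into the robot's workspace frame with a fixed homogeneous transform:

      # Map a hand pose measured inside the suit (e.g., by ShapeTape-like
      # sensing) into the robot's workspace frame. The fixed suit-to-robot
      # transform below uses assumed, illustrative values.
      import numpy as np

      SUIT_TO_ROBOT = np.array([          # homogeneous transform (assumed)
          [1, 0, 0, 0.30],
          [0, 1, 0, -0.10],
          [0, 0, 1, 0.50],
          [0, 0, 0, 1.0],
      ])

      def hand_to_end_effector(hand_xyz):
          """Map a hand position [m] in the suit frame to a robot setpoint."""
          p = np.append(np.asarray(hand_xyz, dtype=float), 1.0)
          return (SUIT_TO_ROBOT @ p)[:3]

      print(hand_to_end_effector([0.42, 0.05, -0.12]))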

  15. Ownership and Agency of an Independent Supernumerary Hand Induced by an Imitation Brain-Computer Interface.

    PubMed

    Bashford, Luke; Mehring, Carsten

    2016-01-01

    To study body ownership and control, illusions that elicit these feelings in non-body objects are widely used. Classically introduced with the Rubber Hand Illusion, these illusions have been replicated more recently in virtual reality and by using brain-computer interfaces. Traditionally these illusions investigate the replacement of a body part by an artificial counterpart; however, as brain-computer interface research develops, it offers us the possibility to explore the case where non-body objects are controlled in addition to movements of our own limbs. Therefore we propose a new illusion designed to test the feeling of ownership and control of an independent supernumerary hand. Subjects are under the impression that they control a virtual reality hand via a brain-computer interface; in reality there is no causal connection between brain activity and virtual hand movement, and correct movements are observed with 80% probability. These imitation brain-computer interface trials are interspersed with movements of both of the subjects' real hands, which are in view throughout the experiment. We show that subjects develop strong feelings of ownership and control over the third hand, despite only receiving visual feedback with no causal link to the actual brain signals. Our illusion is crucially different from previously reported studies as we demonstrate independent ownership and control of the third hand without loss of ownership in the real hands.

  16. Virtual Reality: Real Promises and False Expectations.

    ERIC Educational Resources Information Center

    Homan, Willem J.

    1994-01-01

    Examines virtual reality (VR), and discusses the dilemma of defining VR, the limitations of the current technology, and the implications of VR for education. Highlights include a VR experience; human factors and the interface; and altered reality versus VR. (Author/AEF)

  17. Development of a multimodal transportation educational virtual appliance (MTEVA) to study congestion during extreme tropical events.

    DOT National Transportation Integrated Search

    2011-11-28

    In this study, a prototype Multimodal Transportation Educational Virtual Appliance (MTEVA) is developed to assist in transportation and cyberinfrastructure undergraduate education. This initial version of the MTEVA provides a graphical user interface...

  18. III-V compound semiconductor growth on silicon via germanium buffer and surface passivation for CMOS technology

    NASA Astrophysics Data System (ADS)

    Choi, Donghun

    Integration of III-V compound semiconductors on silicon substrates has recently received much attention for the development of optoelectronic and high speed electronic devices. However, it is well known that there are some key challenges for the realization of III-V device fabrication on Si substrates: (i) the large lattice mismatch (in the case of GaAs: 4.1%), and (ii) the formation of antiphase domains (APD) due to polar compound semiconductor growth on a non-polar elemental structure. Besides these growth issues, the lack of a useful surface passivation technology for compound semiconductors has precluded development of metal-oxide-semiconductor (MOS) devices and causes high surface recombination parasitics in scaled devices. This work demonstrates the growth of high quality III-V materials on Si via an intermediate Ge buffer layer and some surface passivation methods to reduce interface defect density for the fabrication of MOS devices. The initial goal was to achieve both low threading dislocation density (TDD) and low surface roughness in Ge-on-Si heterostructure growth. This was achieved by repeating a deposition-annealing cycle consisting of low temperature deposition + high temperature-high rate deposition + high temperature hydrogen annealing, using reduced-pressure chemical-vapor deposition (CVD). We then grew III-V materials on the Ge/Si virtual substrates using molecular-beam epitaxy (MBE). The relationship between the initial Ge surface configuration and antiphase boundary formation was investigated using surface reflection high-energy electron diffraction (RHEED) patterns and atomic force microscopy (AFM) image analysis. In addition, some MBE growth techniques, such as migration enhanced epitaxy (MEE) and low temperature GaAs growth, were adopted to improve surface roughness and solve the Ge self-doping problem. Finally, an Al2O3 gate oxide layer was deposited using an atomic-layer-deposition (ALD) system after HCl native oxide etching and ALD in-situ pre-annealing at 400 °C. A 100 nm thick aluminum layer was deposited to form the gate contact for MOS device fabrication. C-V measurement results show very small frequency dispersion and 200-300 mV hysteresis, comparable to our best results for InGaAs/GaAs MOS structures on GaAs substrates. Most notably, the quasi-static C-V curve demonstrates clear inversion layer formation. I-V curves show a reasonable leakage current level. The inferred midgap interface state density, Dit, of 2.4 x 10^12 eV^-1 cm^-2 was calculated by the combined high-low frequency capacitance method. In addition, we investigated the interface properties of amorphous LaAlO3/GaAs MOS capacitors fabricated on GaAs substrates. The surface was protected during sample transfer between the III-V and oxide molecular beam deposition (MBD) chambers by a thick arsenic capping layer. An annealing method, a low temperature-short time RTA followed by a high temperature RTA, was developed, yielding extremely small hysteresis (~30 mV), frequency dispersion (~60 mV), and interface trap density (mid-10^10 eV^-1 cm^-2). We used capacitance-voltage (C-V) and current-voltage (I-V) measurements for electrical characterization of MOS devices, tapping-mode AFM for surface morphology analysis, X-ray photoelectron spectroscopy (XPS) for chemical analysis of the interface, and cross-section transmission-electron microscopy (TEM), X-ray diffraction (XRD), secondary ion mass spectrometry (SIMS), and photoluminescence (PL) measurements for film quality characterization.
    This successful growth and the appropriate surface treatment of III-V materials provide a first step for the fabrication of III-V optical and electrical devices on the same Si-based electronic circuits.

  19. Shared virtual environments for telerehabilitation.

    PubMed

    Popescu, George V; Burdea, Grigore; Boian, Rares

    2002-01-01

    Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring, the monitoring portal for hand telerehabilitation, was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations.

  20. A Virtual Environment System for the Comparison of Dome and HMD Systems

    NASA Technical Reports Server (NTRS)

    Chen, Jian; Harm, Deboran L.; Loftin, R. Bowen; Lin, Ching-yao; Leiss, Ernst L.

    2002-01-01

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess what devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of Dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

  1. Research and realization of signal simulation on virtual instrument

    NASA Astrophysics Data System (ADS)

    Zhao, Qi; He, Wenting; Guan, Xiumei

    2010-02-01

    In engineering projects, an arbitrary waveform generator controlled through a software interface is needed for simulation and test. This article discusses a program that uses the SCPI (Standard Commands for Programmable Instruments) protocol and the VISA (Virtual Instrument Software Architecture) library to control an Agilent signal generator (Agilent N5182A) via instrument communication over the LAN interface. The program can generate several signal types, such as CW (continuous wave), AM (amplitude modulation), FM (frequency modulation), ΦM (phase modulation), and sweep. As a result, the program has good operability and portability.
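
    For readers unfamiliar with this stack, a minimal sketch of the approach looks like the following (using the PyVISA library; the IP address is a placeholder, and the exact SCPI command forms should be checked against the N5182A programming guide):

        import pyvisa

        rm = pyvisa.ResourceManager()
        sig_gen = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # assumed address

        print(sig_gen.query("*IDN?"))      # identify the instrument
        sig_gen.write(":FREQ:CW 1GHz")     # CW carrier at 1 GHz
        sig_gen.write(":POW:AMPL -10dBm")  # output level
        sig_gen.write(":AM:STAT ON")       # enable amplitude modulation
        sig_gen.write(":OUTP:STAT ON")     # switch the RF output on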

  2. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined, and examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.

  3. Hearing in True 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In 1984, researchers from Ames Research Center came together to develop advanced human interfaces for NASA's teleoperations that would come to be known as "virtual reality." The work was based on the theory that if the sensory interfaces met a certain threshold and sufficiently supported each other, then the operator would feel present in the remote/synthetic environment, rather than in their physical location. Twenty years later, this prolific research continues to pay dividends to society in the form of cutting-edge virtual reality products, such as an interactive audio simulation system.

  4. Virtual displays for 360-degree video

    NASA Astrophysics Data System (ADS)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
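
    The relationship between the compared views can be illustrated with a simple slicing sketch (ours; the study itself used virtual cameras in a game engine rather than slicing a panorama): an equirectangular 360-degree frame divides into n equal-FOV horizontal slices.

        import numpy as np

        def split_panorama(frame, n_views):
            """Split an equirectangular frame (H x W x 3) into n equal-FOV views."""
            step = frame.shape[1] // n_views
            return [frame[:, i * step:(i + 1) * step] for i in range(n_views)]

        frame = np.zeros((512, 2048, 3), dtype=np.uint8)  # stand-in 360-degree frame
        four_90 = split_panorama(frame, 4)   # four 90-degree views
        two_180 = split_panorama(frame, 2)   # two 180-degree views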

  5. Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia

    PubMed Central

    Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos

    2015-01-01

    Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches based on cognitive models, in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules, embodied in a cognitive architecture, that can detect abnormal cognitive decline across new synthetic task environments. In addition, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUMs) that simulate mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimates of cognitive parameters. Well-established MCI detection tests were used to assess users' cognition, elaborate their ability to multitask, and monitor their performance on infotainment-related tasks. This provides more accurate simulation results within existing conceptual frameworks and enhances predictive validity in interface design, while the increased task complexity captures a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282

  6. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    PubMed Central

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon

    2017-01-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871

  7. Wearables in Medicine.

    PubMed

    Yetisen, Ali K; Martinez-Hurtado, Juan Leonardo; Ünal, Barış; Khademhosseini, Ali; Butt, Haider

    2018-06-11

    Wearables as medical technologies are becoming an integral part of personal analytics, measuring physical status, recording physiological parameters, or informing medication schedules. These continuously evolving technology platforms not only promise to help people pursue a healthier lifestyle, but also provide continuous medical data for actively tracking metabolic status, diagnosis, and treatment. Advances in the miniaturization of flexible electronics, electrochemical biosensors, microfluidics, and artificial intelligence algorithms have led to wearable devices that can generate real-time medical data within the Internet of things. These flexible devices can be configured to make conformal contact with epidermal, ocular, intracochlear, and dental interfaces to collect biochemical or electrophysiological signals. This article discusses consumer trends in wearable electronics, commercial and emerging devices, and fabrication methods. It also reviews real-time monitoring of vital signs using biosensors, stimuli-responsive materials for drug delivery, and closed-loop theranostic systems. It covers future challenges in augmented, virtual, and mixed reality, communication modes, energy management, displays, conformity, and data safety. The development of patient-oriented wearable technologies and their incorporation in randomized clinical trials will facilitate the design of safe and effective approaches. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Evaluation of the Next-Gen Exercise Software Interface in the NEEMO Analog

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Kalogera, Kent; Sandor, Aniko; Hardy, Marc; Frank, Andrew; English, Kirk; Williams, Thomas; Perera, Jeevan; Amonette, William

    2017-01-01

    NSBRI (National Space Biomedical Research Institute) funded a research grant to develop the 'NextGen' exercise software for the NEEMO (NASA Extreme Environment Mission Operations) analog. The goals were to develop a software architecture that integrates instructional, motivational, and socialization techniques into a common portal to enhance exercise countermeasures in remote environments; to increase user efficiency and satisfaction; and to institute commonality across multiple exercise systems. The project utilized GUI (Graphical User Interface) design principles focused on intuitive ease of use to minimize training time and realize early user efficiency. A project requirement was to test the software in an analog environment. Top Level Project Aims: 1) Improve the usability of crew interface software for the exercise CMS (Crew Management System) through common app-like interfaces. 2) Introduce virtual instructional motion training. 3) Use a virtual environment to provide remote socialization with family and friends, and to improve exercise technique, adherence, motivation, and ultimately performance outcomes.

  9. World Reaction to Virtual Space

    NASA Technical Reports Server (NTRS)

    1999-01-01

    DRaW Computing developed virtual reality software for the International Space Station. Open Worlds, as the software has been named, can be made to support Java scripting and virtual reality hardware devices. Open Worlds permits the use of VRML script nodes to add virtual reality capabilities to the user's applications.

  10. The electronic-commerce-oriented virtual merchandise model

    NASA Astrophysics Data System (ADS)

    Fang, Xiaocui; Lu, Dongming

    2004-03-01

    Electronic commerce has become the trend in commercial activities. Provided with a virtual reality interface, electronic commerce gains better expressive capacity and means of interaction. In most applications of virtual reality technology in e-commerce, however, the 3D model only describes the appearance of the merchandise; it carries almost no commerce or interaction information. This results in a disjunction between the virtual model and the commerce information. We therefore present the Electronic-Commerce-oriented Virtual Merchandise Model (ECVMM), which combines the commerce information, interaction information, and figure information of virtual merchandise in a single model. With this abundant information, ECVMM provides better support for information acquisition and communication in electronic commerce.
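
    The paper's core idea is a data model rather than an algorithm; a minimal sketch of such a combined record might look as follows (field names are our assumptions, not the authors' schema):

        from dataclasses import dataclass, field

        @dataclass
        class VirtualMerchandise:
            # figure information: the 3D appearance of the item
            mesh_uri: str
            texture_uris: list = field(default_factory=list)
            # commerce information: what the shop system needs
            sku: str = ""
            price: float = 0.0
            stock: int = 0
            # interaction information: how the user may manipulate the model
            rotatable: bool = True
            openable_parts: list = field(default_factory=list)

        item = VirtualMerchandise(mesh_uri="models/chair.obj", sku="CH-001",
                                  price=129.0, stock=12, openable_parts=["drawer"])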

  11. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model

    PubMed Central

    Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996
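
    The distributed visual feedback described here amounts to painting stiffness estimates onto a map as the probe slides. A hedged sketch of that idea (ours, not the authors' implementation):

        import numpy as np
        import matplotlib.pyplot as plt

        grid = np.full((40, 40), np.nan)  # unexplored tissue stays blank (NaN)

        def record_sample(x, y, stiffness):
            """Store a stiffness estimate (e.g., N/mm) at probed cell (x, y)."""
            grid[y, x] = stiffness

        # simulate one probing pass over a region containing a stiff nodule
        for x in range(40):
            record_sample(x, 20, 0.8 + (1.5 if 18 <= x <= 22 else 0.0))

        plt.imshow(grid, cmap="hot", origin="lower")  # brighter = stiffer
        plt.colorbar(label="stiffness (N/mm)")
        plt.show()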

  12. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    PubMed

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.

  13. Innovative approaches to the rehabilitation of upper extremity hemiparesis using virtual environments

    PubMed Central

    MERIANS, A. S.; TUNIK, E.; FLUET, G. G.; QIU, Q.; ADAMOVICH, S. V.

    2017-01-01

    Aim. Upper-extremity interventions for hemiparesis are a challenging aspect of stroke rehabilitation. The purpose of this paper is to report the feasibility of using virtual environments (VEs) in combination with robotics to assist recovery of hand-arm function, and to present preliminary data demonstrating the potential of using sensory manipulations in VEs to drive activation in targeted neural regions. Methods. We trained 8 subjects for eight three-hour sessions using a library of complex VEs integrated with robots, comparing training the arm and hand separately to training the arm and hand together. An instrumented glove and a hand exoskeleton were used for hand tracking and haptic effects. A Haptic Master robotic arm was used for arm tracking and for generating three-dimensional haptic VEs. To investigate the use of manipulations in VEs to drive neural activations, we created a "virtual mirror" that subjects used while performing a unimanual task. Cortical activation was measured with functional MRI (fMRI) and transcranial magnetic stimulation. Results. Both groups showed improvement in kinematics and measures of real-world function. The group trained using their arm and hand together showed greater improvement. In a stroke subject, fMRI data suggested that virtual mirror feedback could activate the sensorimotor cortex contralateral to the reflected hand (ipsilateral to the moving hand), thus recruiting the lesioned hemisphere. Conclusion. Gaming simulations interfaced with robotic devices provide a training medium that can modify movement patterns. In addition to showing that our VE therapies can optimize behavioral performance, we show preliminary evidence to support the potential of using specific sensory manipulations to selectively recruit targeted neural circuits. PMID:19158659

  14. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    control of position/orientation of mobile TV cameras. Unit 9: Force interaction system. Unit 6: Helmet-mounted displays, robot-like device drive... joints of the master arm (see Unit 1) whose joint coordinates are tracked by the virtual manipulator. Unit 6. Two displays built into the helmet... special device for simulating the tactile-kinaesthetic effect of immersion. When the virtual body is a manipulator it comprises: - master arm with 6

  15. Predicting the Electric Field Distribution in the Brain for the Treatment of Glioblastoma

    PubMed Central

    Miranda, Pedro C.; Mekonnen, Abeye; Salvador, Ricardo; Basser, Peter J.

    2014-01-01

    The use of alternating electric fields has been recently proposed for the treatment of recurrent glioblastoma. In order to predict the electric field distribution in the brain during the application of such tumor treating fields (TTF), we constructed a realistic head model from MRI data and placed transducer arrays on the scalp to mimic an FDA-approved medical device. Values for the tissue dielectric properties were taken from the literature; values for the device parameters were obtained from the manufacturer. The finite element method was used to calculate the electric field distribution in the brain. We also included a “virtual lesion” in the model to simulate the presence of an idealized tumor. The calculated electric field in the brain varied mostly between 0.5 and 2.0 V/cm and exceeded 1.0 V/cm in 60% of the total brain volume. Regions of local field enhancement occurred near interfaces between tissues with different conductivities wherever the electric field was perpendicular to those interfaces. These increases were strongest near the ventricles but were also present outside the tumor’s necrotic core and in some parts of the gray matter-white matter interface. The electric field values predicted in this model brain are in reasonably good agreement with those that have been shown to reduce cancer cell proliferation in vitro. The electric field distribution is highly non-uniform and depends on tissue geometry and dielectric properties. This could explain some of the variability in treatment outcomes. The proposed modeling framework could be used to better understand the physical basis of TTF efficacy through retrospective analysis and to improve TTF treatment planning. PMID:25003941

  16. Predicting the electric field distribution in the brain for the treatment of glioblastoma

    NASA Astrophysics Data System (ADS)

    Miranda, Pedro C.; Mekonnen, Abeye; Salvador, Ricardo; Basser, Peter J.

    2014-08-01

    The use of alternating electric fields has been recently proposed for the treatment of recurrent glioblastoma. In order to predict the electric field distribution in the brain during the application of such tumor treating fields (TTF), we constructed a realistic head model from MRI data and placed transducer arrays on the scalp to mimic an FDA-approved medical device. Values for the tissue dielectric properties were taken from the literature; values for the device parameters were obtained from the manufacturer. The finite element method was used to calculate the electric field distribution in the brain. We also included a ‘virtual lesion’ in the model to simulate the presence of an idealized tumor. The calculated electric field in the brain varied mostly between 0.5 and 2.0 V cm^-1 and exceeded 1.0 V cm^-1 in 60% of the total brain volume. Regions of local field enhancement occurred near interfaces between tissues with different conductivities wherever the electric field was perpendicular to those interfaces. These increases were strongest near the ventricles but were also present outside the tumor’s necrotic core and in some parts of the gray matter-white matter interface. The electric field values predicted in this model brain are in reasonably good agreement with those that have been shown to reduce cancer cell proliferation in vitro. The electric field distribution is highly non-uniform and depends on tissue geometry and dielectric properties. This could explain some of the variability in treatment outcomes. The proposed modeling framework could be used to better understand the physical basis of TTF efficacy through retrospective analysis and to improve TTF treatment planning.

  17. Predicting the electric field distribution in the brain for the treatment of glioblastoma.

    PubMed

    Miranda, Pedro C; Mekonnen, Abeye; Salvador, Ricardo; Basser, Peter J

    2014-08-07

    The use of alternating electric fields has been recently proposed for the treatment of recurrent glioblastoma. In order to predict the electric field distribution in the brain during the application of such tumor treating fields (TTF), we constructed a realistic head model from MRI data and placed transducer arrays on the scalp to mimic an FDA-approved medical device. Values for the tissue dielectric properties were taken from the literature; values for the device parameters were obtained from the manufacturer. The finite element method was used to calculate the electric field distribution in the brain. We also included a 'virtual lesion' in the model to simulate the presence of an idealized tumor. The calculated electric field in the brain varied mostly between 0.5 and 2.0 V cm^-1 and exceeded 1.0 V cm^-1 in 60% of the total brain volume. Regions of local field enhancement occurred near interfaces between tissues with different conductivities wherever the electric field was perpendicular to those interfaces. These increases were strongest near the ventricles but were also present outside the tumor's necrotic core and in some parts of the gray matter-white matter interface. The electric field values predicted in this model brain are in reasonably good agreement with those that have been shown to reduce cancer cell proliferation in vitro. The electric field distribution is highly non-uniform and depends on tissue geometry and dielectric properties. This could explain some of the variability in treatment outcomes. The proposed modeling framework could be used to better understand the physical basis of TTF efficacy through retrospective analysis and to improve TTF treatment planning.
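
    The local field enhancement described here follows directly from a quasi-static boundary condition: the normal current density is continuous across a tissue interface, so sigma_1 * E1n = sigma_2 * E2n, and the lower-conductivity side sees the larger field. A small illustration (ours; the conductivity values are rough literature-style numbers):

        def field_across_interface(e1_normal, sigma1, sigma2):
            """Normal E-field on side 2 (same units as e1_normal)."""
            return e1_normal * sigma1 / sigma2

        # e.g., from cerebrospinal fluid (~1.8 S/m) into white matter (~0.12 S/m):
        print(field_across_interface(0.2, 1.8, 0.12))  # -> 3.0 V/cm, a 15x jump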

  18. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

    An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects with the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  19. Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces

    NASA Astrophysics Data System (ADS)

    O'Connor, Timothy Francis, III

    Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS); such devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The work concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.

  20. Multi-Material ALE with AMR for Modeling Hot Plasmas and Cold Fragmenting Materials

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Masters, Nathan; Fisher, Aaron; Eder, David; Liu, Wangyi; Anderson, Robert; Benson, David; Bertozzi, Andrea

    2015-02-01

    We have developed a new 3D multi-physics multi-material code, ALE-AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR) to connect the continuum to the microstructural regimes. The code is unique in its ability to model hot radiating plasmas and cold fragmenting solids. New numerical techniques were developed for many of the physics packages to work efficiently on a dynamically moving and adapting mesh. We use interface reconstruction based on volume fractions of the material components within mixed zones, and reconstruct interfaces as needed. This interface reconstruction model is also used for void coalescence and fragmentation. A flexible strength/failure framework allows for pluggable material models, which may require material history arrays to determine the level of accumulated damage or the evolving yield stress in J2 plasticity models. For some applications, laser rays propagate through a virtual composite mesh consisting of the finest-resolution representation of the modeled space. A new 2nd-order accurate diffusion solver has been implemented for the thermal conduction and radiation transport packages. One application area is the modeling of laser/target effects, including debris/shrapnel generation. Other application areas include warm dense matter, EUV lithography, and material-wall interactions for fusion devices.

  1. Control of an ER haptic master in a virtual slave environment for minimally invasive surgery applications

    NASA Astrophysics Data System (ADS)

    Han, Young-Min; Choi, Seung-Bok

    2008-12-01

    This paper presents the control performance of an electrorheological (ER) fluid-based haptic master device connected to a virtual slave environment for minimally invasive surgery (MIS). A previously developed haptic joint featuring controllable ER fluid and a spherical joint mechanism is adopted for the master system. Medical forceps and an angular position measuring device are devised and integrated with the joint to establish the MIS master system. In order to embody a human organ in virtual space, a volumetric deformable object is used. The virtual object is mathematically formulated by a shape-retaining chain-linked (S-chain) model. After evaluating the reflection force, computation time, and compatibility with real-time control, the haptic architecture for MIS is established by incorporating the virtual slave with the master device, so that the reflection force computed for the virtual slave object and the desired position commanded by the master operator are transferred to each other. In order to achieve the desired force trajectories, a sliding mode controller is formulated and then experimentally realized. Tracking control performance for various force trajectories is evaluated and presented in the time domain.
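
    As a sketch of what a sliding mode force controller of this kind can look like (our construction with illustrative gains, not the authors' control law), one common form drives the sliding variable s = lam*e + de/dt to zero with a saturated switching term:

        import numpy as np

        LAM, K, PHI = 5.0, 2.0, 0.05  # sliding slope, switching gain, boundary layer

        def smc_step(f_desired, f_measured, e_prev, dt):
            """One control step; returns (actuator command, current error)."""
            e = f_desired - f_measured
            s = LAM * e + (e - e_prev) / dt        # sliding variable
            u = K * np.clip(s / PHI, -1.0, 1.0)    # sat(s/phi) limits chattering
            return u, e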

  2. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Konz, Daniel W. (Inventor); Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Winkelmann, Joseph P. (Inventor)

    2006-01-01

    The present invention provides a network device interface and method for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller and, based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is then converted into digital signals and transmitted back to the controller. In one embodiment, the bus controller sends commands and data at a defined bit rate, and the network device interface senses this bit rate and sends data back to the bus controller using the same defined bit rate.
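
    Schematically, the interface sits in a command-poll-reply loop. A toy sketch of that flow (ours, with an invented message format and channel objects, not the patent's protocol):

        class NetworkDeviceInterface:
            def __init__(self, channels):
                self.channels = channels  # address -> sensor/actuator object

            def handle(self, command):
                chan = self.channels[command["address"]]
                if command["op"] == "read":    # sensor: digitize and reply
                    return {"address": command["address"], "value": chan.read()}
                if command["op"] == "write":   # actuator: apply the value
                    chan.write(command["value"])
                return None                    # writes are not acknowledged here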

  3. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules, i.e. various input or output devices and applications on remote hosts. From a system point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. While the Device Server contains external modules requiring event-based communication, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.

  4. NAFFS: network attached flash file system for cloud storage on portable consumer electronics

    NASA Astrophysics Data System (ADS)

    Han, Lin; Huang, Hao; Xie, Changsheng

    Cloud storage technology has become a research hotspot in recent years, but existing cloud storage services are mainly designed for data storage needs over stable, high-speed Internet connections. Mobile Internet connections are often unstable and relatively slow. These native features of the mobile Internet limit the use of cloud storage on portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of using a portable device's built-in NAND flash memory as the front-end cache of a virtualized cloud storage device. Modern portable devices with Internet connections have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections[1], and its non-volatile nature makes it very suitable as a cache device for Internet cloud storage on portable devices, which often have unstable power supplies and intermittent Internet connections. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network attached file systems, such as NFS. Our evaluation results indicate that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than directly using a cloud storage API. Unstable Internet connections and sudden power-off conditions are tolerated, and no cached data is lost in such situations.
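
    The caching idea reduces to a familiar pattern: reads are served from flash after a first fetch, and writes land on flash immediately and are pushed upstream when the link allows. A toy sketch (ours, far simpler than NAFFS; the cache path and cloud API are assumptions):

        import os

        class FlashCachedCloud:
            def __init__(self, cloud, cache_dir="/flash/cache"):  # assumed mount
                self.cloud, self.cache_dir = cloud, cache_dir
                self.dirty = set()  # files written locally, awaiting upload

            def read(self, name):
                path = os.path.join(self.cache_dir, name)
                if not os.path.exists(path):       # miss: fetch once from cloud
                    self.cloud.download(name, path)
                with open(path, "rb") as f:
                    return f.read()

            def write(self, name, data):
                with open(os.path.join(self.cache_dir, name), "wb") as f:
                    f.write(data)                  # always lands on flash first
                self.dirty.add(name)               # upload later, when connected

            def sync(self):
                for name in list(self.dirty):      # call whenever the link is up
                    self.cloud.upload(os.path.join(self.cache_dir, name), name)
                    self.dirty.discard(name)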

  5. Comparative assessment of two interfaces for delivering a multimedia medical course in the French-speaking Virtual Medical University (UMVF).

    PubMed

    Brunetaud, Jean Marc; Leroy, Nicolas; Pelayo, Sylvia; Wascat, Caroline; Renard, Jean Marie; Prin, Lionel; Beuscart-Zéphir, Marie Catherine

    2003-01-01

    The UMVF aims at helping medical students during their normal curriculum via the facilities provided by Internet-based techniques. This paper describes a comparative assessment of two interfaces delivering a multimedia course: a conventional web server (WS) and an integrated e-learning platform in the form of a Virtual Campus (VC). Eleven students were arbitrarily divided into two groups. We used a qualitative method to compare their acceptance of the online course provided by the two different interfaces. Both groups were globally satisfied. However, a decrease in satisfaction was noted at the end of the experiment in the VC group. This may be explained by the more complex Graphical User Interface (GUI) of the VC and some constraints that do not exist with the WS. Current e-learning platforms are probably not optimized for working conditions where in-person and virtual activities are mixed. We think that a new type of "light" platform should be developed for these specific working conditions. Students in both groups also expressed reservations about the multimedia environment. They may change their opinion as they become more accustomed to the multimedia environment and as their teachers make more adequate use of multimedia techniques.

  6. Surgical virtual reality - highlights in developing a high performance surgical haptic device.

    PubMed

    Custură-Crăciun, D; Cochior, D; Constantinoiu, S; Neagu, C

    2013-01-01

    Just as simulators are a standard in aviation and aerospace sciences, we expect surgical simulators to soon become a standard in medical applications. These will correctly instruct future doctors in surgical techniques without the need for hands-on patient instruction. Using virtual reality to digitally transpose surgical procedures changes surgery in a revolutionary manner, offering possibilities for implementing new and much more efficient learning methods, allowing the practice of new surgical techniques, and improving surgeons' abilities and skills. The perfecting of haptic devices has opened the door to a series of opportunities in the fields of research, industry, nuclear science and medicine. Concepts that were purely theoretical at first, such as telerobotics, telepresence or telerepresentation, have become a practical reality as computing techniques, telecommunications and haptic devices evolved, allowing virtual reality to take a new leap. In the field of surgery, barriers and controversies still remain regarding the implementation and generalization of surgical virtual simulators. These obstacles are connected to the high costs of this not yet sufficiently developed technology, especially in the domain of haptic devices.

  7. Virtually optimized insoles for offloading the diabetic foot: A randomized crossover study.

    PubMed

    Telfer, S; Woodburn, J; Collier, A; Cavanagh, P R

    2017-07-26

    Integration of objective biomechanical measures of foot function into the design process for insoles has been shown to provide enhanced plantar tissue protection for individuals at risk of plantar ulceration. The use of virtual simulations utilizing numerical modeling techniques offers a potential approach to further optimize these devices. In a patient population at risk of foot ulceration, we aimed to compare the pressure offloading performance of insoles that were optimized via numerical simulation techniques against shape-based devices. Twenty participants with diabetes and at-risk feet were enrolled in this study. Three pairs of personalized insoles were produced: one pair based on shape data and subsequently manufactured via direct milling, and two pairs based on a design derived from shape, pressure, and ultrasound data, which underwent a finite element analysis-based virtual optimization procedure. Of the latter design, one pair was manufactured via direct milling and the second pair through 3D printing. The offloading performance of the insoles was analyzed for forefoot regions identified as having elevated plantar pressures. In 88% of the regions of interest, the use of virtually optimized insoles resulted in lower peak plantar pressures compared to the shape-based devices. Overall, the virtually optimized insoles significantly reduced peak pressures by a mean of 41.3 kPa (p<0.001, 95% CI [31.1, 51.5]) for milled and 40.5 kPa (p<0.001, 95% CI [26.4, 54.5]) for printed devices compared to shape-based insoles. The integration of virtual optimization into the insole design process resulted in improved offloading performance compared to standard, shape-based devices. ISRCTN19805071, www.ISRCTN.org. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An Augmented Virtuality Display for Improving UAV Usability

    DTIC Science & Technology

    2005-01-01

    cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and

  9. Advanced Technology for Portable Personal Visualization

    DTIC Science & Technology

    1993-01-01

    have no cable to drag." We submitted a short article describing the ceiling tracker and the requirements demanded of trackers in see-through systems... Newspaper/Magazine Articles: "Virtual Reality: It's All in the Mind," Atlanta Constitution, 29 September 1992; "Virtual Reality: Exploring the Future..." basic scientific investigation of the human haptic system or to serve as haptic interfaces for virtual environments and teleoperation. 2. Research

  10. Anomalous single-electron transfer in common-gate quadruple-dot single-electron devices with asymmetric junction capacitances

    NASA Astrophysics Data System (ADS)

    Imai, Shigeru; Ito, Masato

    2018-06-01

    In this paper, anomalous single-electron transfer in common-gate quadruple-dot turnstile devices with asymmetric junction capacitances is revealed. That is, the islands have the same total number of excess electrons at high and low gate voltages of the swing that transfers a single electron. In another situation, two electrons enter the islands from the source and two electrons leave the islands for the source and drain during a gate voltage swing cycle. First, stability diagrams of the turnstile devices are presented. Then, sequences of single-electron tunneling events by gate voltage swings are investigated, which demonstrate the above-mentioned anomalous single-electron transfer between the source and the drain. The anomalous single-electron transfer can be understood by regarding the four islands as “three virtual islands and a virtual source or drain electrode of a virtual triple-dot device”. The anomalous behaviors of the four islands are explained by the normal behavior of the virtual islands transferring a single electron and the behavior of the virtual electrode.

  11. Comparison of Walking and Traveling-Wave Piezoelectric Motors as Actuators in Kinesthetic Haptic Devices.

    PubMed

    Olsson, Pontus; Nysjo, Fredrik; Carlbom, Ingrid B; Johansson, Stefan

    2016-01-01

    Piezoelectric motors offer an attractive alternative to electromagnetic actuators in portable haptic interfaces: they are compact, have a high force-to-volume ratio, and can operate with limited or no gearing. However, the choice of a piezoelectric motor type is not obvious due to differences in performance characteristics. We present our evaluation of two commercial, operationally different, piezoelectric motors acting as actuators in two kinesthetic haptic grippers, a walking quasi-static motor and a traveling wave ultrasonic motor. We evaluate each gripper's ability to display common virtual objects including springs, dampers, and rigid walls, and conclude that the walking quasi-static motor is superior at low velocities. However, for applications where high velocity is required, traveling wave ultrasonic motors are a better option.

  12. Construction and validation of a distance learning module on premedication antisepsis for nursing professionals.

    PubMed

    Pereira, Barbara Juliana da Costa; Mendes, Isabel Amélia Costa; Jorge, Beatriz Maria; Mazzo, Alessandra

    2013-11-01

    The aim of this descriptive study, carried out at a public university, was to design, develop, and validate a distance learning module on intramuscular premedication antisepsis. The content was introduced in the Modular Object-Oriented Dynamic Learning Environment, based on the Systematic Model for Web-Based Training projects. Ten nurses and information technologists at work consented to participate, in compliance with ethical guidelines, and answered a questionnaire to validate the Virtual Learning Environment. The educational aspects of the environment interface were mostly evaluated as "excellent," whereas the assessment of didactic resources indicated interactivity difficulties. It is concluded that distance learning is an important tool for the teaching of premedication antisepsis. To ensure its effectiveness, appropriate methods and interactive devices must be used.

  13. Dual-polarity plasmonic metalens for visible light

    NASA Astrophysics Data System (ADS)

    Chen, Xianzhong; Huang, Lingling; Mühlenbernd, Holger; Li, Guixin; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan; Qiu, Cheng-Wei; Zhang, Shuang; Zentgraf, Thomas

    2012-11-01

    Surface topography and refractive index profile dictate the deterministic functionality of a lens. The polarity of most lenses reported so far, that is, either positive (convex) or negative (concave), depends on the curvatures of the interfaces. Here we experimentally demonstrate a counter-intuitive dual-polarity flat lens based on helicity-dependent phase discontinuities for circularly polarized light. Specifically, by controlling the helicity of the input light, the positive and negative polarity are interchangeable in one identical flat lens. Helicity-controllable real and virtual focal planes, as well as magnified and demagnified imaging, are observed on the same plasmonic lens at visible and near-infrared wavelengths. The plasmonic metalens with dual polarity may empower advanced research and applications in helicity-dependent focusing and imaging devices, angular-momentum-based quantum information processing and integrated nano-optoelectronics.

  14. SAFARI: An Environment for Creating Tutoring Systems in Industrial Training.

    ERIC Educational Resources Information Center

    Gecsei, J.; Frasson, C.

    Safari is a cooperative project involving four Quebec universities, two industrial partners (Virtual Prototypes, Inc., providing the VAPS software package, and Novasys, Inc., a consulting firm specializing in artificial intelligence and training), and government. VAPS (Virtual Applications Prototyping System) is a commercial interface-building and…

  15. Graphene as a platform for novel nanoelectronic devices

    NASA Astrophysics Data System (ADS)

    Standley, Brian

    Graphene's superlative electrical and mechanical properties, combined with its compatibility with existing planar silicon-based technology, make it an attractive platform for novel nanoelectronic devices. The development of two such devices is reported: a nonvolatile memory element exploiting the nanoscale graphene edge, and a field-effect transistor using graphene for both the conducting channel and, in oxidized form, the gate dielectric. These experiments were enabled by custom software written to fully utilize both instrument-based and computer-based data acquisition hardware and provide a simple measurement automation system. Graphene break junctions were studied and found to exhibit switching behavior in response to an electric field. This switching allows the devices to act as nonvolatile memory elements which have demonstrated thousands of writing cycles and long retention times. A model for device operation is proposed based on the formation and breaking of carbon-atom chains that bridge the junctions. Information storage was demonstrated using the concept of rank coding, in which information is stored in the relative conductance of multiple graphene switches in a memory cell. The high mobility and two-dimensional nature of graphene make it an attractive material for field-effect transistors. Another ultrathin layered material, graphene's insulating analogue graphite oxide, was studied as an alternative to bulk gate dielectric materials such as Al2O3 or HfO2. Transistors were fabricated comprising single or bilayer graphene channels, graphite oxide gate insulators, and metal top-gates. Electron transport measurements reveal minimal leakage through the graphite oxide at room temperature. Its breakdown electric field was found to be comparable to SiO2, typically ~1-3 x 10^8 V/m, while its dielectric constant is slightly higher, κ ≈ 4.3. As nanoelectronics experiments and their associated instrumentation continue to grow in complexity, the need for powerful data acquisition software has only increased. This role has traditionally been filled by semiconductor parameter analyzers or desktop computers running LabVIEW. Mezurit 2 represents a hybrid approach, providing basic virtual instruments which can be controlled in concert through a comprehensive scripting interface. Each virtual instrument's model of operation is described and an architectural overview is provided.

  16. Spectroscopic Studies of the Electronic Structure of Metal-Semiconductor and Vacuum-Semiconductor Interfaces.

    DTIC Science & Technology

    1982-12-31

    interfaces which are of importance in such semiconductor devices as MOSFETs, CCD devices, and photovoltaic devices... interfaces is interesting for the study of electrolytic cells. Our photoemission study reveals for the first time how the electronic structure of water

  17. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  18. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model, which allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events, and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs, and similar network-based user interfaces, will be a prerequisite for future mobile user interfaces and essential for developing clinical multi-device environments.

  19. Physical interface dynamics alter how robotic exosuits augment human movement: implications for optimizing wearable assistive devices.

    PubMed

    Yandell, Matthew B; Quinlivan, Brendan T; Popov, Dmitry; Walsh, Conor; Zelik, Karl E

    2017-05-18

    Wearable assistive devices have demonstrated the potential to improve mobility outcomes for individuals with disabilities, and to augment healthy human performance; however, these benefits depend on how effectively power is transmitted from the device to the human user. Quantifying and understanding this power transmission is challenging due to complex human-device interface dynamics that occur as biological tissues and physical interface materials deform and displace under load, absorbing and returning power. Here we introduce a new methodology for quickly estimating interface power dynamics during movement tasks using common motion capture and force measurements, and then apply this method to quantify how a soft robotic ankle exosuit interacts with and transfers power to the human body during walking. We partition exosuit end-effector power (i.e., power output from the device) into power that augments ankle plantarflexion (termed augmentation power) vs. power that goes into deformation and motion of interface materials and underlying soft tissues (termed interface power). We provide empirical evidence of how human-exosuit interfaces absorb and return energy, reshaping exosuit-to-human power flow and resulting in three key consequences: (i) During exosuit loading (as applied forces increased), about 55% of exosuit end-effector power was absorbed into the interfaces. (ii) However, during subsequent exosuit unloading (as applied forces decreased) most of the absorbed interface power was returned viscoelastically. Consequently, the majority (about 75%) of exosuit end-effector work over each stride contributed to augmenting ankle plantarflexion. (iii) Ankle augmentation power (and work) was delayed relative to exosuit end-effector power, due to these interface energy absorption and return dynamics. Our findings elucidate the complexities of human-exosuit interface dynamics during transmission of power from assistive devices to the human body, and provide insight into improving the design and control of wearable robots. We conclude that in order to optimize the performance of wearable assistive devices it is important, throughout design and evaluation phases, to account for human-device interface dynamics that affect power transmission and thus human augmentation benefits.
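
    The power partition described above can be computed directly from synchronized force and motion-capture data. A hedged sketch (array names and shapes are our assumptions, not the authors' code):

        import numpy as np

        def partition_power(force, v_end_effector, v_body, dt):
            """force and velocities: (T, 3) arrays; returns powers (W) and work (J)."""
            p_end = np.einsum("ij,ij->i", force, v_end_effector)  # device output
            p_aug = np.einsum("ij,ij->i", force, v_body)          # reaches the ankle
            p_interface = p_end - p_aug   # absorbed/returned by tissue and straps
            work = {key: np.trapz(p, dx=dt)
                    for key, p in [("end", p_end), ("aug", p_aug),
                                   ("interface", p_interface)]}
            return p_end, p_aug, p_interface, work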

  20. Nature and origins of virtual environments - A bibliographical essay

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    1991-01-01

    Virtual environments presented via head-mounted, computer-driven displays provide a new media for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed, archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.

  1. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    PubMed

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Brain-computer interface users speak up: the Virtual Users' Forum at the 2013 International Brain-Computer Interface Meeting.

    PubMed

    Peters, Betts; Bieker, Gregory; Heckman, Susan M; Huggins, Jane E; Wolf, Catherine; Zeitlin, Debra; Fried-Oken, Melanie

    2015-03-01

    More than 300 researchers gathered at the 2013 International Brain-Computer Interface (BCI) Meeting to discuss current practice and future goals for BCI research and development. The authors organized the Virtual Users' Forum at the meeting to provide the BCI community with feedback from users. We report on the Virtual Users' Forum, including initial results from ongoing research being conducted by 2 BCI groups. Online surveys and in-person interviews were used to solicit feedback from people with disabilities who are expert and novice BCI users. For the Virtual Users' Forum, their responses were organized into 4 major themes: current (non-BCI) communication methods, experiences with BCI research, challenges of current BCIs, and future BCI developments. Two authors with severe disabilities gave presentations during the Virtual Users' Forum, and their comments are integrated with the other results. While participants' hopes for BCIs of the future remain high, their comments about available systems mirror those made by consumers about conventional assistive technology. They reflect concerns about reliability (eg, typing accuracy/speed), utility (eg, applications and the desire for real-time interactions), ease of use (eg, portability and system setup), and support (eg, technical support and caregiver training). People with disabilities, as target users of BCI systems, can provide valuable feedback and input on the development of BCI as an assistive technology. To this end, participatory action research should be considered as a valuable methodology for future BCI research. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  3. High power microwave generator

    DOEpatents

    Minich, Roger W.

    1988-01-01

    A device (10) for producing high-powered and coherent microwaves is described. The device comprises an evacuated, cylindrical, and hollow real cathode (20) that is driven to inwardly field emit relativistic electrons. The electrons pass through an internally disposed cylindrical and substantially electron-transparent cylindrical anode (24), proceed toward a cylindrical electron collector electrode (26), and form a cylindrical virtual cathode (32). Microwaves are produced by spatial and temporal oscillations of the cylindrical virtual cathode (32), and by electrons that reflex back and forth between the cylindrical virtual cathode (32) and the cylindrical real cathode (20).

  4. Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2002-01-01

    This report demonstrates that the capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, create a unique and valuable system that provides users with an efficient virtual capability to tour facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instruction on operating both scripted and unscripted tours, as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript, with emphasis on the network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them, and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.

  5. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  6. Virtual reality applied to teletesting

    NASA Astrophysics Data System (ADS)

    van den Berg, Thomas J.; Smeenk, Roland J. M.; Mazy, Alain; Jacques, Patrick; Arguello, Luis; Mills, Simon

    2003-05-01

    The activity "Virtual Reality applied to Teletesting" is related to a wider European Space Agency (ESA) initiative of cost reduction, in particular the reduction of test costs. Reduction of costs of space related projects have to be performed on test centre operating costs and customer company costs. This can accomplished by increasing the automation and remote testing ("teletesting") capabilities of the test centre. Main problems related to teletesting are a lack of situational awareness and the separation of control over the test environment. The objective of the activity is to evaluate the use of distributed computing and Virtual Reality technology to support the teletesting of a payload under vacuum conditions, and to provide a unified man-machine interface for the monitoring and control of payload, vacuum chamber and robotics equipment. The activity includes the development and testing of a "Virtual Reality Teletesting System" (VRTS). The VRTS is deployed at one of the ESA certified test centres to perform an evaluation and test campaign using a real payload. The VRTS is entirely written in the Java programming language, using the J2EE application model. The Graphical User Interface runs as an applet in a Web browser, enabling easy access from virtually any place.

  7. Polymer-based actuators for virtual reality devices

    NASA Astrophysics Data System (ADS)

    Bolzmacher, Christian; Hafez, Moustapha; Benali Khoudja, Mohamed; Bernardoni, Paul; Dubowsky, Steven

    2004-07-01

    Virtual Reality (VR) is gaining importance in our society. For many years, VR was limited to entertainment applications. Today, practical applications such as training and prototyping find a promising future in VR. There is therefore an increasing demand for low-cost, lightweight haptic devices for VR environments. Electroactive polymers seem to be a potential actuation technology that could satisfy these requirements. Dielectric polymers developed in the past few years have shown large displacements (more than 300%). This feature makes them quite interesting for integration in haptic devices due to their muscle-like behaviour. Polymer actuators are flexible and lightweight compared to traditional actuators. Using stacks with several layers of elastomeric film increases the force without limiting the output displacement. The paper discusses design methods for a linear dielectric polymer actuator for VR devices. Experimental results of the actuator performance are presented.

  8. The roles of carrier concentration and interface, bulk, and grain-boundary recombination for 25% efficient CdTe solar cells

    DOE PAGES

    Kanevce, A.; Reese, Matthew O.; Barnes, T. M.; ...

    2017-06-06

    CdTe devices have reached efficiencies of 22% due to continuing improvements in bulk material properties, including minority carrier lifetime. Device modeling has helped to guide these device improvements by quantifying the impacts of material properties and different device designs on device performance. One of the barriers to truly predictive device modeling is the interdependence of these material properties. For example, interfaces become more critical as bulk properties, particularly, hole density and carrier lifetime, increase. We present device-modeling analyses that describe the effects of recombination at the interfaces and grain boundaries as lifetime and doping of the CdTe layer change. The doping and lifetime should be priorities for maximizing open-circuit voltage (Voc) and efficiency improvements. However, interface and grain boundary recombination become bottlenecks for device performance at increased lifetime and doping levels. In conclusion, this work quantifies and discusses these emerging challenges for next-generation CdTe device efficiency.
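    The role of doping and lifetime in setting the open-circuit voltage can be read off the standard diode relation (a textbook figure of merit, not a formula from this paper):

        V_{oc} \approx \frac{kT}{q}\,\ln\!\left(\frac{J_{ph}}{J_0} + 1\right)

    where J_{ph} is the photogenerated current density and J_0 the saturation current density set by recombination; higher hole density and longer lifetime reduce J_0 and therefore raise V_{oc}, until interface and grain-boundary recombination come to dominate J_0 and cap further gains.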

  9. An Online Virtual Laboratory of Electricity

    ERIC Educational Resources Information Center

    Gómez Tejedor, J. A.; Moltó Martínez, G.; Barros Vidaurre, C.

    2008-01-01

    In this article, we describe a Java-based virtual laboratory, accessible via the Internet by means of a Web browser. This remote laboratory enables the students to build both direct and alternating current circuits. The program includes a graphical user interface which resembles the connection board, and also the electrical components and tools…

  10. Technology-Enhanced Learning and Community with Market Appeal.

    ERIC Educational Resources Information Center

    Young, Brian Alexander

    2000-01-01

    Describes the University of Dayton's Personalized Virtual Room. This Web interface to a virtual space that looks and feels like a campus residence was designed to encourage communication and connectivity among first-year students before they arrive on campus. Discusses the initiative's goals and successes, student reaction, and lessons learned.…

  11. A high performance two degree-of-freedom kinesthetic interface

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Rosen, Michael J.

    1991-01-01

    This summary focuses on the kinesthetic interface of a virtual environment system that was developed at the Newman Laboratory for Biomechanics and Human Rehabilitation at M.I.T. for the study of manual control in both motorically impaired and able-bodied individuals.

  12. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
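    As an illustration of the pre-rendered tile idea, the sketch below shows how a thin client might address oblique image tiles by viewing direction, zoom level, and tile coordinates; the URL template and parameter names are hypothetical, not taken from the paper.

        # Hypothetical slippy-map-style addressing for pre-rendered oblique
        # views of a 3D city model; template and names are illustrative only.
        def oblique_tile_url(base_url: str, direction: str, zoom: int, x: int, y: int) -> str:
            """direction: one of the four oblique viewing directions."""
            assert direction in {"N", "E", "S", "W"}
            return f"{base_url}/{direction}/{zoom}/{x}/{y}.jpg"

        # Example: oblique_tile_url("https://tiles.example.org/city", "N", 15, 17600, 10750)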

  13. Human factors optimization of virtual environment attributes for a space telerobotic control station

    NASA Astrophysics Data System (ADS)

    Lane, Jason Corde

    2000-10-01

    Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display, which is a graphical simulation that shows the commands sent to the actual system in real-time. The primary goal of this research was to show a commanded display as an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments were used to investigate how subjects compensated for time delay under a variety of conditions while controlling a 7-degree of freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship occurred even at different manipulator speeds, varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time delay effects, up to 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts to the task worksite, even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display was still a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted. A correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head mounted display and LCD shutter glasses, were compared. The results of the above testing were used to develop an effective virtual environment control station to control a multi-arm robot.
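    A toy sketch of the commanded-display idea follows (all names invented for illustration): the display renders each command immediately, while the remote manipulator only receives it after the transmission delay, which is what lets operators work ahead of the delayed feedback.

        from collections import deque

        class CommandedDisplay:
            """Show commands instantly; deliver them to the remote arm late."""
            def __init__(self, delay_steps: int):
                self.pipe = deque([None] * delay_steps)  # models the time delay

            def step(self, command):
                self.render(command)          # commanded display: no delay
                self.pipe.append(command)
                return self.pipe.popleft()    # what the real arm executes now

            def render(self, command):
                print("display shows:", command)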

  14. Spatial issues in user interface design from a graphic design perspective

    NASA Technical Reports Server (NTRS)

    Marcus, Aaron

    1989-01-01

    The user interface of a computer system is a visual display that provides information about the status of operations on data within the computer and control options to the user that enable adjustments to these operations. From the very beginning of computer technology the user interface was a spatial display, although its spatial features were not necessarily complex or explicitly recognized by the users. All text and nonverbal signs appeared in a virtual space generally thought of as a single flat plane of symbols. Current technology of high performance workstations permits any element of the display to appear as dynamic, multicolor, 3-D signs in a virtual 3-D space. The complexity of appearance and the user's interaction with the display provide significant challenges to the graphic designer of current and future user interfaces. In particular, spatial depiction provides many opportunities for effective communication of objects, structures, processes, navigation, selection, and manipulation. Issues are presented that are relevant to the graphic designer seeking to optimize the user interface's spatial attributes for effective visual communication.

  15. Fragment-Based Docking: Development of the CHARMMing Web User Interface as a Platform for Computer-Aided Drug Design

    PubMed Central

    2015-01-01

    Web-based user interfaces to scientific applications are important tools that allow researchers to utilize a broad range of software packages with just an Internet connection and a browser. One such interface, CHARMMing (CHARMM interface and graphics), facilitates access to the powerful and widely used molecular software package CHARMM. CHARMMing incorporates tasks such as molecular structure analysis, dynamics, multiscale modeling, and other techniques commonly used by computational life scientists. We have extended CHARMMing’s capabilities to include a fragment-based docking protocol that allows users to perform molecular docking and virtual screening calculations either directly via the CHARMMing Web server or on computing resources using the self-contained job scripts generated via the Web interface. The docking protocol was evaluated by performing a series of “re-dockings” with direct comparison to top commercial docking software. Results of this evaluation showed that CHARMMing’s docking implementation is comparable to many widely used software packages and validates the use of the new CHARMM generalized force field for docking and virtual screening. PMID:25151852

  16. Fragment-based docking: development of the CHARMMing Web user interface as a platform for computer-aided drug design.

    PubMed

    Pevzner, Yuri; Frugier, Emilie; Schalk, Vinushka; Caflisch, Amedeo; Woodcock, H Lee

    2014-09-22

    Web-based user interfaces to scientific applications are important tools that allow researchers to utilize a broad range of software packages with just an Internet connection and a browser. One such interface, CHARMMing (CHARMM interface and graphics), facilitates access to the powerful and widely used molecular software package CHARMM. CHARMMing incorporates tasks such as molecular structure analysis, dynamics, multiscale modeling, and other techniques commonly used by computational life scientists. We have extended CHARMMing's capabilities to include a fragment-based docking protocol that allows users to perform molecular docking and virtual screening calculations either directly via the CHARMMing Web server or on computing resources using the self-contained job scripts generated via the Web interface. The docking protocol was evaluated by performing a series of "re-dockings" with direct comparison to top commercial docking software. Results of this evaluation showed that CHARMMing's docking implementation is comparable to many widely used software packages and validates the use of the new CHARMM generalized force field for docking and virtual screening.

  17. Virtual Reality and Online Databases: Will "Look and Feel" Literally Mean "Look" and "Feel"? [and]"Online" Interviews Dr. Thomas A. Furness III, Virtual Reality Pioneer.

    ERIC Educational Resources Information Center

    Miller, Carmen

    1992-01-01

    The first of two articles discusses virtual reality (VR) and online databases; the second one reports on an interview with Thomas A. Furness III, who defines VR and explains work at the Human Interface Technology Laboratory (HIT). Sidebars contain a glossary of VR terms and a conversation with Toni Emerson, the HIT lab's librarian. (LRW)

  18. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools that facilitate bringing up large networks of virtual machines, including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  19. Introduction into the Virtual Olympic Games Framework for online communities.

    PubMed

    Stoilescu, Dorian

    2009-06-01

    This paper presents the design of the Virtual Olympic Games Framework (VOGF), a computer application designated for athletics, health care, general well-being, nutrition and fitness, which offers multiple benefits for its participants. A special interest in starting the design of the framework was in exploring how people can connect and participate together using existing computer technologies (i.e. gaming consoles, exercise equipment with computer interfaces, devices of measuring health, speed, force and distance and Web 2.0 applications). A stationary bike set-up offering information to users about their individual health and athletic performances has been considered as a starting model. While this model is in the design stage, some preliminary findings are encouraging, suggesting the potential for various fields: sports, medicine, theories of learning, technologies and cybercultural studies. First, this framework would allow participants to perform a variety of sports and improve their health. Second, this would involve creating an online environment able to store health information and sport performances correlated with accessing multi-media data and research about performing sports. Third, participants could share experiences with other athletes, coaches and researchers. Fourth, this framework also provides support for the research community in their future investigations.

  20. Programmable Nano-Bio Interfaces for Functional Biointegrated Devices.

    PubMed

    Cai, Pingqiang; Leow, Wan Ru; Wang, Xiaoyuan; Wu, Yun-Long; Chen, Xiaodong

    2017-07-01

    A large amount of evidence has demonstrated the revolutionary role of nanosystems in the screening and shielding of biological systems. The explosive development of interfacing bioentities with programmable nanomaterials has conveyed the intriguing concept of nano-bio interfaces. Here, recent advances in functional biointegrated devices through the precise programming of nano-bio interactions are outlined, especially with regard to the rational assembly of constituent nanomaterials on multiple dimension scales (e.g., nanoparticles, nanowires, layered nanomaterials, and 3D-architectured nanomaterials), in order to leverage their respective intrinsic merits for different functions. Emerging nanotechnological strategies at nano-bio interfaces are also highlighted, such as multimodal diagnosis or "theragnostics", synergistic and sequential therapeutics delivery, and stretchable and flexible nanoelectronic devices, and their implementation into a broad range of biointegrated devices (e.g., implantable, minimally invasive, and wearable devices). When utilized as functional modules of biointegrated devices, these programmable nano-bio interfaces will open up a new chapter for precision nanomedicine. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
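    The scheduling idea can be summarized as a generic elasticity loop; the sketch below is pure illustration, with `batch` and `cloud` standing in for batch-system and cloud (e.g. OpenNebula or Eucalyptus) APIs that the paper does not spell out.

        import time

        def elasticity_loop(batch, cloud, max_cloud_nodes: int, poll_s: int = 60):
            """Attach cloud-hosted virtual worker nodes while jobs are queued."""
            started = []
            while True:
                if batch.queued_jobs() > 0 and len(started) < max_cloud_nodes:
                    node = cloud.launch_worker_image()   # boot a virtual worker node
                    batch.register_worker(node)          # attach it to the local batch system
                    started.append(node)
                elif batch.queued_jobs() == 0 and started:
                    cloud.terminate(started.pop())       # shrink when the queue drains
                time.sleep(poll_s)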

  2. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for a better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position, using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve the spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
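    For the 2D stage of the pipeline, a minimal sketch is given below, assuming scikit-image is available; the Frangi vesselness filter stands in for the paper's dedicated curvilinear noise-reduction step.

        import numpy as np
        from skimage.filters import frangi, threshold_otsu
        from skimage.morphology import skeletonize

        def extract_device_centerline(frame: np.ndarray) -> np.ndarray:
            """Binarize a fluoroscopy frame and thin it to a one-pixel centerline."""
            enhanced = frangi(frame)                      # emphasize curvilinear structures
            binary = enhanced > threshold_otsu(enhanced)  # global thresholding
            return skeletonize(binary)                    # topology-preserving thinning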

  3. A Multi-purpose Brain-Computer Interface Output Device

    PubMed Central

    Thompson, David E; Huggins, Jane E

    2012-01-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as standalone communication and control systems, rather than as interfaces to existing systems built for these purposes. While an individual communication and control system may be powerful or flexible, no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e. without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems. PMID:22208120

  4. A multi-purpose brain-computer interface output device.

    PubMed

    Thompson, David E; Huggins, Jane E

    2011-10-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as stand-alone communication and control systems, rather than as interfaces to existing systems built for these purposes. An individual communication and control system may be powerful or flexible, but no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e., without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems.

  5. The Human Interface Technology Laboratory.

    ERIC Educational Resources Information Center

    Washington Univ., Seattle. Washington Technology Center.

    This booklet contains information about the Human Interface Technology Laboratory (HITL), which was established by the Washington Technology Center at the University of Washington to transform virtual world concepts and research into practical, economically viable technology products. The booklet is divided into seven sections: (1) a brief…

  6. Creating widely accessible spatial interfaces: mobile VR for managing persistent pain.

    PubMed

    Schroeder, David; Korsakov, Fedor; Jolton, Joseph; Keefe, Francis J; Haley, Alex; Keefe, Daniel F

    2013-01-01

    Using widely accessible VR technologies, researchers have implemented a series of multimodal spatial interfaces and virtual environments. The results demonstrate the degree to which we can now use low-cost (for example, mobile-phone based) VR environments to create rich virtual experiences involving motion sensing, physiological inputs, stereoscopic imagery, sound, and haptic feedback. Adapting spatial interfaces to these new platforms can open up exciting application areas for VR. In this case, the application area was in-home VR therapy for patients suffering from persistent pain (for example, arthritis and cancer pain). For such therapy to be successful, a rich spatial interface and rich visual aesthetic are particularly important. So, an interdisciplinary team with expertise in technology, design, meditation, and the psychology of pain collaborated to iteratively develop and evaluate several prototype systems. The video at http://youtu.be/mMPE7itReds demonstrates how the sine wave fitting responds to walking motions, for a walking-in-place application.

  7. Device USB interface and software development for electric parameter measuring instrument

    NASA Astrophysics Data System (ADS)

    Li, Deshi; Chen, Jian; Wu, Yadong

    2003-09-01

    Aimed at general device development, this paper discusses the development of a USB interface and its software. Taking as an example the PDIUSBD12, which supports a parallel interface, the paper analyzes its technical characteristics. Different interface circuits were designed with the 80C52 single-chip microcomputer and TMS320C54-series digital signal processors, and the address allocation and register access were analyzed. According to the USB 1.1 standard protocol, the device software and application-layer protocol were designed. The paper also designed the data-exchange protocol and implemented the system functions.

  8. Virtual Observatory Interfaces to the Chandra Data Archive

    NASA Astrophysics Data System (ADS)

    Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.

    2014-05-01

    The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
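    For orientation, a Simple Cone Search request is just an HTTP GET with the three standard IVOA parameters; the sketch below uses a placeholder endpoint, not the actual CXC service URL.

        import requests

        def cone_search(service_url: str, ra_deg: float, dec_deg: float, sr_deg: float) -> str:
            """Issue a standard SCS query; the response is a VOTable (XML) of matches."""
            params = {"RA": ra_deg, "DEC": dec_deg, "SR": sr_deg}
            resp = requests.get(service_url, params=params, timeout=30)
            resp.raise_for_status()
            return resp.text

        # Hypothetical endpoint, shown for shape only:
        # votable = cone_search("https://cda.example.edu/scs/obscat", 83.63, 22.01, 0.2)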

  9. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  10. Enhancing the versatility of wireless biopotential acquisition for myoelectric prosthetic control.

    PubMed

    Bercich, Rebecca A; Wang, Zhi; Mei, Henry; Hammer, Lauren H; Seburn, Kevin L; Hargrove, Levi J; Irazoqui, Pedro P

    2016-08-01

    A significant challenge in rehabilitating upper-limb amputees with sophisticated, electric-powered prostheses is sourcing reliable and independent channels of motor control information sufficient to precisely direct multiple degrees of freedom simultaneously. In response to the expressed needs of clinicians, we have developed a miniature, batteryless recording device that utilizes emerging integrated circuit technology and optimal impedance matching for magnetic resonantly coupled (MRC) wireless power transfer to improve the performance and versatility of wireless electrode interfaces with muscle. In this work we describe the fabrication and performance of a fully wireless and batteryless EMG recording system and use of this system to direct virtual and electric-powered limbs in real-time. The advantage of using MRC to optimize power transfer to a network of wireless devices is exhibited by EMG collected from an array of eight devices placed circumferentially around a human subject's forearm. This is a comprehensive, low-cost, and non-proprietary solution that provides unprecedented versatility of configuration to direct myoelectric prostheses without wired connections to the body. The amenability of MRC to varied coil geometries and arrangements has the potential to improve the efficiency and robustness of wireless power transfer links at all levels of upper-limb amputation. Additionally, the wireless recording device's programmable flash memory and selectable features will grant clinicians the unique ability to adapt and personalize the recording system's functional protocol for patient- or algorithm-specific needs.
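    The link-efficiency motivation for optimal impedance matching can be summarized by a standard coupled-resonator result (a textbook figure of merit, not a formula from this paper): with coupling coefficient k and coil quality factors Q_1 and Q_2, the maximum achievable power-transfer efficiency under matched loading is

        \eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2}

    so raising the figure of merit k^2 Q_1 Q_2, for example through coil geometry and matching, directly raises the efficiency ceiling for every device in the array.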

  11. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
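    The core of the instant-radiosity stages described above can be sketched in a few lines; the sketch assumes the reflective-shadow-map buffers (world positions, normals, flux) are already available as arrays, and all names are illustrative.

        import numpy as np

        def sample_vpls(rsm_pos, rsm_n, rsm_flux, n_vpls, rng=None):
            """Pick virtual point lights from reflective-shadow-map pixels."""
            rng = rng or np.random.default_rng()
            idx = rng.choice(len(rsm_pos), size=n_vpls, replace=False)
            return rsm_pos[idx], rsm_n[idx], rsm_flux[idx] / n_vpls  # split flux

        def indirect_radiance(x, n, vpl_pos, vpl_n, vpl_flux, eps=1e-3):
            """Single-bounce diffuse gather over the VPLs at surface point x."""
            d = vpl_pos - x                                # (N, 3) vectors to VPLs
            r2 = np.maximum((d * d).sum(axis=1), eps)      # clamp to avoid spikes
            w = d / np.sqrt(r2)[:, None]                   # unit directions
            cos_r = np.clip(w @ n, 0.0, None)              # cosine at receiver
            cos_v = np.clip(-(w * vpl_n).sum(axis=1), 0.0, None)  # cosine at VPL
            return (vpl_flux * (cos_r * cos_v / r2)[:, None]).sum(axis=0)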

  12. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a 5-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering needs of the various vehicle conception steps [1]. Starting from vehicle Design, through the subsequent Engineering steps, Ergonomic evaluation and perceived quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as an iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look'n'feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a 4-sided homemade low-cost virtual reality room, powered by ultra-short-range and standard HD home projectors.

  13. Proof of Concept of Home IoT Connected Vehicles

    PubMed Central

    Kim, Younsun; Oh, Hyunggoy; Kang, Sungho

    2017-01-01

    The way in which we interact with our cars is changing, driven by the increased use of mobile devices, cloud-based services, and advanced automotive technology. In particular, the requirements and market demand for the Internet of Things (IoT) device-connected vehicles will continuously increase. In addition, the advances in cloud computing and IoT have provided a promising opportunity for developing vehicular software and services in the automotive domain. In this paper, we introduce the concept of a home IoT connected vehicle with a voice-based virtual personal assistant comprising a vehicle agent and a home agent. The proposed concept is evaluated by implementing a smartphone linked with home IoT devices that are connected to an infotainment system for the vehicle, a smartphone-based natural language interface input device, and cloud-based home IoT devices for the home. The home-to-vehicle connected service scenarios that aim to reduce the inconvenience due to simple and repetitive tasks by improving the urban mobility efficiency in IoT environments are substantiated by analyzing real vehicle testing and lifestyle research. Remarkable benefits are derived by combining repetitive routine tasks into a single task executed by one command, and by executing essential tasks automatically, without any request. However, the service should be used with authorized permission, applied without any error at the right time, and applied under limited conditions to sense the inhabitants’ intention correctly and to gain the required trust regarding the remote execution of tasks. PMID:28587246

  14. Proof of Concept of Home IoT Connected Vehicles.

    PubMed

    Kim, Younsun; Oh, Hyunggoy; Kang, Sungho

    2017-06-05

    The way in which we interact with our cars is changing, driven by the increased use of mobile devices, cloud-based services, and advanced automotive technology. In particular, the requirements and market demand for the Internet of Things (IoT) device-connected vehicles will continuously increase. In addition, the advances in cloud computing and IoT have provided a promising opportunity for developing vehicular software and services in the automotive domain. In this paper, we introduce the concept of a home IoT connected vehicle with a voice-based virtual personal assistant comprising a vehicle agent and a home agent. The proposed concept is evaluated by implementing a smartphone linked with home IoT devices that are connected to an infotainment system for the vehicle, a smartphone-based natural language interface input device, and cloud-based home IoT devices for the home. The home-to-vehicle connected service scenarios that aim to reduce the inconvenience due to simple and repetitive tasks by improving the urban mobility efficiency in IoT environments are substantiated by analyzing real vehicle testing and lifestyle research. Remarkable benefits are derived by combining repetitive routine tasks into a single task executed by one command, and by executing essential tasks automatically, without any request. However, the service should be used with authorized permission, applied without any error at the right time, and applied under limited conditions to sense the inhabitants' intention correctly and to gain the required trust regarding the remote execution of tasks.

  15. Network device interface for digitally interfacing data channels to a controller via a network

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Philip J. (Inventor); Grant, Robert L. (Inventor); Winkelmann, Joseph P. (Inventor); Konz, Daniel W. (Inventor)

    2009-01-01

    A communications system and method are provided for digitally connecting a plurality of data channels, such as sensors, actuators, and subsystems, to a controller using a network bus. The network device interface interprets commands and data received from the controller and polls the data channels in accordance with these commands. Specifically, the network device interface receives digital commands and data from the controller, and based on these commands and data, communicates with the data channels to either retrieve data in the case of a sensor or send data to activate an actuator. Data retrieved from the sensor is converted into digital signals and transmitted to the controller. Network device interfaces associated with different data channels can coordinate communications with the other interfaces based on either a transition in a command message sent by the bus controller or a synchronous clock signal.
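    The command-driven polling pattern the patent describes can be sketched as follows; the message framing, channel API, and all names are invented for illustration and are not specified by the patent.

        class NetworkDeviceInterface:
            """Interpret controller commands and poll the attached data channels."""

            def __init__(self, channels):
                self.channels = channels            # channel id -> sensor/actuator

            def handle(self, command):
                kind, chan_id, payload = command    # assumed (kind, id, data) framing
                channel = self.channels[chan_id]
                if kind == "read":                  # sensor: digitize and return data
                    return ("data", chan_id, channel.sample())
                if kind == "write":                 # actuator: apply the setpoint
                    channel.actuate(payload)
                    return ("ack", chan_id, None)
                return ("error", chan_id, "unknown command")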

  16. Incorporating an optical waveguide into a neural interface

    DOEpatents

    Tolosa, Vanessa; Delima, Terri L.; Felix, Sarah H.; Pannu, Satinderpall S.; Shah, Kedar G.; Sheth, Heeral; Tooker, Angela C.

    2016-11-08

    An optical waveguide integrated into a multielectrode array (MEA) neural interface includes a device body, at least one electrode in the device body, at least one electrically conducting lead coupled to the at least one electrode, at least one optical channel in the device body, and waveguide material in the at least one optical channel. The fabrication of a neural interface device includes the steps of providing a device body, providing at least one electrode in the device body, providing at least one electrically conducting lead coupled to the at least one electrode, providing at least one optical channel in the device body, and providing a waveguide material in the at least one optical channel.

  17. Virtual patients in a real clinical context using augmented reality: impact on antibiotics prescription behaviors.

    PubMed

    Nifakos, Sokratis; Zary, Nabil

    2014-01-01

    The research community has called for the development of effective educational interventions for addressing prescription behaviour since antimicrobial resistance remains a global health issue. Examining the potential to displace the educational process from Personal Computers to Mobile devices, in this paper we investigated a new method of integration of Virtual Patients into Mobile devices with augmented reality technology, enriching the practitioner's education in prescription behavior. Moreover, we also explored which information are critical during the prescription behavior education and we visualized these information on real context with augmented reality technology, simultaneously with a running Virtual Patient's scenario. Following this process, we set the educational frame of experiential knowledge to a mixed (virtual and real) environment.

  18. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  19. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques allow to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  20. Role of point defects and HfO2/TiN interface stoichiometry on effective work function modulation in ultra-scaled complementary metal-oxide-semiconductor devices

    NASA Astrophysics Data System (ADS)

    Pandey, R. K.; Sathiyanarayanan, Rajesh; Kwon, Unoh; Narayanan, Vijay; Murali, K. V. R. M.

    2013-07-01

    We investigate the physical properties of a portion of the gate stack of an ultra-scaled complementary metal-oxide-semiconductor (CMOS) device. The effects of point defects, such as oxygen vacancy, oxygen, and aluminum interstitials at the HfO2/TiN interface, on the effective work function of TiN are explored using density functional theory. We compute the diffusion barriers of such point defects in the bulk TiN and across the HfO2/TiN interface. Diffusion of these point defects across the HfO2/TiN interface occurs during the device integration process. This results in variation of the effective work function and hence in the threshold voltage variation in the devices. Further, we simulate the effects of varying the HfO2/TiN interface stoichiometry on the effective work function modulation in these extremely-scaled CMOS devices. Our results show that the interface rich in nitrogen gives higher effective work function, whereas the interface rich in titanium gives lower effective work function, compared to a stoichiometric HfO2/TiN interface. This theoretical prediction is confirmed by the experiment, demonstrating over 700 meV modulation in the effective work function.
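    The computed migration barriers connect to processing conditions through the usual Arrhenius form (a standard relation, not specific to this paper):

        D = D_0 \exp\!\left(-\frac{E_a}{k_B T}\right)

    where E_a is the diffusion barrier obtained from density functional theory; lower barriers imply more point-defect redistribution across the HfO2/TiN interface during the thermal steps of device integration, and hence larger effective work function and threshold-voltage variation.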

  1. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
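    The matching step the paper describes reduces to a feasibility check between interface performance and device requirements; the sketch below uses invented names and purely illustrative numbers.

        def feasible_pairs(interfaces, devices):
            """interfaces/devices: name -> (throughput in bit/s, latency in s).
            An interface fits a device if it is at least as fast and as prompt."""
            return [(i_name, d_name)
                    for i_name, (i_tp, i_lat) in interfaces.items()
                    for d_name, (d_tp, d_lat) in devices.items()
                    if i_tp >= d_tp and i_lat <= d_lat]

        # Illustrative values only:
        # feasible_pairs({"P300 speller": (5.0, 2.0)}, {"domotic switch": (1.0, 5.0)})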

  2. Usability evaluation of low-cost virtual reality hand and arm rehabilitation games.

    PubMed

    Seo, Na Jin; Arun Kumar, Jayashree; Hur, Pilwon; Crocher, Vincent; Motawar, Binal; Lakshminarayanan, Kishor

    2016-01-01

    The emergence of lower-cost motion tracking devices enables home-based virtual reality rehabilitation activities and increased accessibility to patients. Currently, little documentation on patients' expectations for virtual reality rehabilitation is available. This study surveyed 10 people with stroke for their expectations of virtual reality rehabilitation games. This study also evaluated the usability of three lower-cost virtual reality rehabilitation games using a survey and House of Quality analysis. The games (kitchen, archery, and puzzle) were developed in the laboratory to encourage coordinated finger and arm movements. Lower-cost motion tracking devices, the P5 Glove and Microsoft Kinect, were used to record the movements. People with stroke were found to desire motivating and easy-to-use games with clinical insights and encouragement from therapists. The House of Quality analysis revealed that the games should be improved by obtaining evidence for clinical effectiveness, including clinical feedback regarding improving functional abilities, adapting the games to the user's changing functional ability, and improving usability of the motion-tracking devices. This study reports the expectations of people with stroke for rehabilitation games and usability analysis that can help guide development of future games.

  3. Virtual prototyping and testing of in-vehicle interfaces.

    PubMed

    Bullinger, Hans-Jörg; Dangelmaier, Manfred

    2003-01-15

    Electronic innovations that are slowly but surely changing the very nature of driving need to be tested before being introduced to the market. To meet this need a system for integrated virtual prototyping and testing has been developed. Functional virtual prototypes of various traffic systems, such as driver assistance, driver information, and multimedia systems can now be easily tested in a driving simulator by a rapid prototyping approach. The system has been applied in recent R&D projects.

  4. Guidelines for developing distributed virtual environment applications

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.

    1998-08-01

    We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework; and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas. These areas are requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.

  5. Affective Interaction with a Virtual Character Through an fNIRS Brain-Computer Interface.

    PubMed

    Aranyi, Gabor; Pecune, Florian; Charles, Fred; Pelachaud, Catherine; Cavazza, Marc

    2016-01-01

    Affective brain-computer interfaces (BCI) harness Neuroscience knowledge to develop affective interaction from first principles. In this article, we explore affective engagement with a virtual agent through Neurofeedback (NF). We report an experiment where subjects engage with a virtual agent by expressing positive attitudes towards her under a NF paradigm. We use for affective input the asymmetric activity in the dorsolateral prefrontal cortex (DL-PFC), which has been previously found to be related to the high-level affective-motivational dimension of approach/avoidance. The magnitude of left-asymmetric DL-PFC activity, measured using functional near infrared spectroscopy (fNIRS) and treated as a proxy for approach, is mapped onto a control mechanism for the virtual agent's facial expressions, in which action units (AUs) are activated through a neural network. We carried out an experiment with 18 subjects, which demonstrated that subjects are able to successfully engage with the virtual agent by controlling their mental disposition through NF, and that they perceived the agent's responses as realistic and consistent with their projected mental disposition. This interaction paradigm is particularly relevant in the case of affective BCI as it facilitates the volitional activation of specific areas normally not under conscious control. Overall, our contribution reconciles a model of affect derived from brain metabolic data with an ecologically valid, yet computationally controllable, virtual affective communication environment.
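    The asymmetry signal used as NF input is commonly quantified with a normalized left-minus-right score, which can then be squashed into an action-unit activation level; the sketch below uses a standard asymmetry index and illustrative mapping constants, not those of the study.

        import numpy as np

        def dlpfc_asymmetry(left_hbo: np.ndarray, right_hbo: np.ndarray) -> float:
            """Normalized left-minus-right oxygenated-hemoglobin asymmetry."""
            l, r = float(np.mean(left_hbo)), float(np.mean(right_hbo))
            return (l - r) / (abs(l) + abs(r) + 1e-9)

        def au_intensity(asym: float, gain: float = 5.0) -> float:
            """Map asymmetry (a proxy for approach) to a 0..1 facial AU activation."""
            return 1.0 / (1.0 + np.exp(-gain * asym))  # logistic squashing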

  6. Flexible Architecture for FPGAs in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Clark, Duane I.; Lim, Chester N.

    2012-01-01

    Commonly, field-programmable gate arrays (FPGAs) being developed in cPCI embedded systems include the bus interface in the FPGA. This complicates the development because the interface is complicated and requires a lot of development time and FPGA resources. In addition, flight qualification requires a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA, or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application specific integrated circuit (ASIC). An application programming interface (API) further reduces the development time needed to use the interface device in a system.

  7. Development and Implementation of the X.25 Protocol for the Universal Network Interface Device (UNID) II. Volume 1.

    DTIC Science & Technology

    1985-12-01

    development of an improved Universal Network Interface Device (UNID II). The UNID II’s architecture was based on a preliminary design project at...interface device, performing all functions required by the multi-ring LAN. The device depicted by RADC’s studies would connect a highly variable group of host...used the ISO Open Systems Interconnection (OSI) seven layer model as the basic structure for data flow and program development. In 1982 Cuomo

  8. Virtual Reality: An Experiential Tool for Clinical Psychology

    ERIC Educational Resources Information Center

    Riva, Giuseppe

    2009-01-01

    Several Virtual Reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 15 years. Typically, in VR the patient learns to manipulate problematic situations related to his/her problem. In fact, VR can be described as an advanced form of human-computer interface that is able…

  9. The Best of All Worlds: Immersive Interfaces for Art Education in Virtual and Real World Teaching and Learning Environments

    ERIC Educational Resources Information Center

    Grenfell, Janette

    2013-01-01

    Selected ubiquitous technologies encourage collaborative participation between higher education students and educators within a virtual socially networked e-learning landscape. Multiple modes of teaching and learning, ranging from real world experiences, to text and digital images accessed within the Deakin studies online learning management…

  10. WebIntera-Classroom: An Interaction-Aware Virtual Learning Environment for Augmenting Learning Interactions

    ERIC Educational Resources Information Center

    Chen, Jingjing; Xu, Jianliang; Tang, Tao; Chen, Rongchao

    2017-01-01

    Interaction is critical for successful teaching and learning in a virtual learning environment (VLE). This paper presents a web-based interaction-aware VLE--WebIntera-classroom--which aims to augment learning interactions by increasing the learner-to-content and learner-to-instructor interactions. We design a ubiquitous interactive interface that…

  11. Virtual Classroom for Business Planning Formulation.

    ERIC Educational Resources Information Center

    Osorio, J.; Rubio-Royo, E.; Ocon, A.

    One of the most promising possibilities of the World Wide Web resides in its potential to support distance education. In 1996, the University of Las Palmas de Gran Canaria developed the "INNOVA Project" in order to promote Web-based training and learning. As a result, the Virtual Classroom Interface (IVA) was created. Several software…

  12. Active Learning Environments with Robotic Tangibles: Children's Physical and Virtual Spatial Programming Experiences

    ERIC Educational Resources Information Center

    Burleson, Winslow S.; Harlow, Danielle B.; Nilsen, Katherine J.; Perlin, Ken; Freed, Natalie; Jensen, Camilla Nørgaard; Lahey, Byron; Lu, Patrick; Muldner, Kasia

    2018-01-01

    As computational thinking becomes increasingly important for children to learn, we must develop interfaces that leverage the ways that young children learn to provide opportunities for them to develop these skills. Active Learning Environments with Robotic Tangibles (ALERT) and Robopad, an analogous on-screen virtual spatial programming…

  13. Robot Teleoperation and Perception Assistance with a Virtual Holographic Display

    NASA Technical Reports Server (NTRS)

    Goddard, Charles O.

    2012-01-01

    Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus whose position and rotation are tracked inside it. Using this system, a possible interface for this sort of robot control is proposed.

  14. The expert surgical assistant. An intelligent virtual environment with multimodal input.

    PubMed

    Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C

    1996-01-01

    Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used, we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
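
    The third step, matching inferred user actions against steps in a surgical procedure, can be illustrated with a toy rule matcher; the semantic frames and procedure steps below are invented for the example and are not drawn from the paper.

```python
# Toy illustration of step three: interpreted multimodal commands (already
# reduced to semantic frames) are matched against an ordered procedure.
PROCEDURE = [
    {"action": "identify", "object": "middle_turbinate"},
    {"action": "incise",   "object": "uncinate_process"},
    {"action": "open",     "object": "maxillary_ostium"},
]

def match_step(frame, procedure, current):
    """Return (new step index, feedback) for one interpreted semantic frame."""
    if current >= len(procedure):
        return current, "Procedure already complete."
    if frame == procedure[current]:
        return current + 1, f"Step {current + 1} complete."
    if frame in procedure[current + 1:]:
        return current, "Out of order: earlier steps are still pending."
    return current, "Action not part of this procedure."

step = 0
step, msg = match_step({"action": "identify", "object": "middle_turbinate"}, PROCEDURE, step)
print(step, msg)  # -> 1 Step 1 complete.
```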

  15. Strategies for combining physics videos and virtual laboratories in the training of physics teachers

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana; Vertchenko, Lev; Martins, Maria Inés

    2007-03-01

    Among the multimedia resources used in physics education, the most prominent are virtual laboratories and videos. On one hand, computer simulations and applets have very attractive graphic interfaces, showing an incredible amount of detail and movement. On the other hand, videos offer the possibility of displaying high-quality images, and are becoming more feasible with the increasing availability of digital resources. We believe it is important to discuss, throughout the teacher training program, both the functionality of information and communication technology (ICT) in physics education and the varied applications of these resources. In our work we suggest introducing ICT resources in a sequence that integrates these important tools in the teacher training program, as opposed to the traditional approach, in which virtual laboratories and videos are introduced separately. In this perspective, when we introduce and utilize virtual laboratory techniques we also provide for their use in videos, taking advantage of graphic interfaces. Thus the students in our program learn to use instructional software in the production of videos for classroom use.

  16. Stereoscopic visualization and haptic technology used to create a virtual environment for remote surgery - biomed 2011.

    PubMed

    Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M

    2011-01-01

    The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector and provide force feedback in three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Benchtop experiments using the interfacing system have also been conducted. A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
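
    The "calculated angle and distance apart" follows standard stereoscopy geometry. As a hedged illustration (the paper's actual values are not given), the toe-in convergence angle for cameras with baseline b fixating a target at working distance D is

\[
\theta \;=\; 2\arctan\!\left(\frac{b}{2D}\right),
\]

    so a baseline near the human interpupillary distance (b ≈ 65 mm) at a working distance of D = 150 mm gives θ = 2·arctan(65/300) ≈ 24°.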

  17. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
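
    A one-dimensional caricature of the claimed force profile, with invented constants: resistance ramps up as the locus of interaction approaches the boundary and is released once the boundary is traversed.

```python
def boundary_force(x, boundary=1.0, k=5.0, width=0.2):
    """Force felt at position x approaching a boundary: resistance grows
    within `width` of the boundary, then drops abruptly after crossing
    (the 'push through the windshield' effect). Constants are illustrative,
    not taken from the patent."""
    if x <= boundary:
        d = boundary - x
        return k * max(0.0, 1.0 - d / width)  # ramps up near the boundary
    return 0.0                                # force released after crossing

for x in (0.5, 0.9, 0.99, 1.01):
    print(x, round(boundary_force(x), 2))
```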

  18. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

    In this article, we present an approach that uses two force-sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing, and also to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  19. Force sensitive handles and capacitive touch sensor for driving a flexible haptic-based immersive system.

    PubMed

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-10-09

    In this article, we present an approach that uses two force-sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing, and also to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape.
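
    One plausible way to recover six degrees of freedom from two 3-axis handle readings, sketched below, is to translate with the common force component and rotate with the differential one; this is an illustrative scheme, not the authors' documented control law.

```python
import numpy as np

def handles_to_motion(f_left, f_right, gain_t=0.01, gain_r=0.05):
    """Map two 3-axis force-sensitive handle readings to a 6-DOF motion:
    the common component translates the virtual object, the differential
    component rotates it. Gains and axes are invented for illustration."""
    f_left, f_right = np.asarray(f_left, float), np.asarray(f_right, float)
    translation = gain_t * (f_left + f_right) / 2.0           # push both handles to translate
    rotation = gain_r * np.cross([1.0, 0.0, 0.0],             # twist via opposing forces
                                 f_right - f_left)
    return translation, rotation

t, r = handles_to_motion([0.0, 2.0, 0.0], [0.0, 2.0, 1.0])
print(t, r)
```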

  20. Development and application of virtual reality for man/systems integration

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1991-01-01

    While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still has the problem of presenting an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the primary way a human physically interacts with their environment is through their hands, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world. The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.

  1. Semiconductor nanowire devices: Novel morphologies and applications to electrogenic biological systems

    NASA Astrophysics Data System (ADS)

    Timko, Brian Paul

    The interface between nanoscale semiconductors and biological systems represents a powerful means for molecular-scale, two-way communication between these two diverse yet complementary systems. In this thesis, I present a general methodology for the synthesis of semiconductor nanowires with rationally-defined material composition and geometry. Specifically, I demonstrate that this technique can be used to fabricate silicon nanowires, hollow nanostructures (e.g. nanotubes, nanocones and branched tubular networks), and Ge/Si heterostructures that exhibit 1D hole gases. Using bottom-up assembly techniques, nanostructures are subsequently built into arrays containing up to tens of nanowire field-effect transistors (NW-FETs) that exhibit exquisite sensitivity to local charges. Significantly, this robust assembly technique enables integration of disparate materials (e.g. n- and p-type silicon nanowires) on virtually any type of substrate. These arrays are particularly useful for integration with biological systems. I will demonstrate that at the single-cell level, silicon nanowire device arrays can be integrated with mammalian neurons. Discrete hybrid structures enable neuronal stimulation and recording at the axon, dendrite, or soma with high sensitivity and spatial resolution, while aligned arrays containing up to 50 devices can be used to measure the speed and temporal evolution of signals or to interact with a single cell as multiple inputs and outputs. I analyze the shape and magnitude of reported signals, and place them within the context of previously reported results. Hybrid interfaces can also be extended to entire organs such as embryonic chicken hearts. NW-FET signals are synchronized with the beating heart, and the signal amplitude is directly related to the device sensitivity. Multiplexed measurements made from NW-FET arrays further show that signal propagation across the myocardium can be mapped, with a potential resolution significantly better than microelectrode techniques. I exploit the unique capability of the bottom-up approach to fabricate NW-FET arrays on flexible and transparent plastic substrates, and demonstrate that these novel device arrays enable signal recording in a number of conformations as well as registration of devices to the heart surface. Taken together, these findings demonstrate that nanowire device arrays are a robust platform for studying electrically-active systems at the single-cell or whole-tissue level, and could enable fundamental studies of cellular-level biophysics, real-time drug assays, and novel implants.

  2. Virtual Reality, Combat, and Communication.

    ERIC Educational Resources Information Center

    Thrush, Emily Austin; Bodary, Michael

    2000-01-01

    Presents a brief examination of the evolution of virtual reality devices that illustrates how the development of this new medium is influenced by emerging technologies and by marketing pressures. Notes that understanding these influences may help prepare for the role of technical communicators in building virtual reality applications for education…

  3. An adaptable navigation strategy for Virtual Microscopy from mobile platforms.

    PubMed

    Corredor, Germán; Romero, Eduardo; Iregui, Marcela

    2015-04-01

    Real integration of Virtual Microscopy with the pathologist service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images (WSI). Nowadays, mobile devices have the potential to support an online pervasive network of specialists working together. However, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI from mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: memory usage did not exceed 7% of the device capacity, while decoding times were below 200 ms per region of interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios.
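
    One piece of the granularity the JPEG2000 standard offers is resolution scalability: each wavelet decomposition level halves the image in both dimensions. The arithmetic below picks the level at which a whole slide fits a 256×256 viewport; it is illustrative only, as the paper's streaming logic is more elaborate.

```python
import math

def resolution_level(full_width, full_height, view=256, max_levels=6):
    """Pick the JPEG2000 DWT resolution level at which a whole-slide image
    fits a square viewport: each level halves both dimensions. The cap on
    decomposition levels is an assumption of this sketch."""
    scale = max(full_width, full_height) / view
    level = max(0, math.ceil(math.log2(scale))) if scale > 1 else 0
    return min(level, max_levels)

# An 80000 x 60000 slide would need level ceil(log2(80000/256)) = 9,
# capped here at max_levels = 6.
print(resolution_level(80000, 60000))
```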

  4. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides Infrastructure, Platform and Software as services. Virtualization and computer utility are the keys of cloud computing. The number of cloud users is increasing day by day, so resources must be made available on demand to satisfy user requirements. The technique by which resources such as storage, processing power, memory and network I/O are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually, and no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In paravirtualization, the guest OS must be modified to run in parallel with other operating systems, and for the guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  5. Robotics virtual rail system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
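
    A minimal sketch of the waypoint-file generation step: positions are sampled along each user-drawn segment, and each position carries its segment's velocity. The sampling interval and output format are invented for illustration; the patent does not specify them.

```python
import math

def make_waypoints(segments, spacing=0.25):
    """segments: list of ((x0, y0), (x1, y1), velocity) tuples forming the
    virtual rail from the start point to the end point. Positions are
    sampled every `spacing` units along each segment."""
    waypoints = []
    for (x0, y0), (x1, y1), v in segments:
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(length / spacing))
        for i in range(steps + 1):
            t = i / steps
            waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
    return waypoints

# A two-segment path: a faster straight leg, then a slower leg to the end.
path = [((0, 0), (2, 0), 0.5), ((2, 0), (2, 3), 0.2)]
for x, y, v in make_waypoints(path, spacing=1.0):
    print(f"{x:.2f} {y:.2f} {v:.2f}")  # one waypoint per line: x y velocity
```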

  6. Using CORBA to integrate manufacturing cells to a virtual enterprise

    NASA Astrophysics Data System (ADS)

    Pancerella, Carmen M.; Whiteside, Robert A.

    1997-01-01

    It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated in the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow for rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support information flow from design to planning, data is made accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration with PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.
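
    The pattern, stripped of CORBA specifics, is a single common interface behind which devices can be registered, removed, or upgraded dynamically. The Python stand-in below illustrates it; a real deployment would express the interface in IDL and serve it through an ORB such as omniORB.

```python
from abc import ABC, abstractmethod

class MachiningDevice(ABC):
    """Stand-in for the common IDL interface every device exports."""
    @abstractmethod
    def load_program(self, nc_code: str) -> None: ...
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def status(self) -> str: ...

class CellManager:
    """Accesses devices only through the common interface, so devices can
    be inserted, deleted, or upgraded without changing cell-management code."""
    def __init__(self) -> None:
        self.devices: dict[str, MachiningDevice] = {}

    def register(self, name: str, dev: MachiningDevice) -> None:
        self.devices[name] = dev          # dynamic insertion into the cell

    def unregister(self, name: str) -> None:
        self.devices.pop(name, None)      # dynamic deletion

    def run(self, name: str, nc_code: str) -> str:
        dev = self.devices[name]          # no device-specific knowledge needed
        dev.load_program(nc_code)
        dev.start()
        return dev.status()
```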

  7. Mobility for GCSS-MC through virtual PCs

    DTIC Science & Technology

    2017-06-01

    their productivity. Mobile device access to GCSS-MC would allow Marines to access a required program for their mission using a form of computing ...network throughput applications with a device running on various operating systems with limited computational ability. The use of VPCs leads to a...reduced need for network throughput and faster overall execution. 14. SUBJECT TERMS GCSS-MC, enterprise resource planning, virtual personal computer

  8. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to poor control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
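
    The serial auditory protocol can be sketched as a scanning menu driven by a binary classifier: commands are announced in turn, motor imagery selects the one currently sounding, and the idle state lets the scan advance. The prompts and timing below are illustrative, and a keyboard stands in for the EEG decoder.

```python
import itertools

COMMANDS = ["forward", "right", "back", "left"]  # the four navigation commands

def classify_epoch():
    """Stand-in for the two-class decoder: True when sensorimotor rhythms
    indicate right-hand motor imagery, False for the idle state. The real
    system decodes EEG; here a keyboard prompt simulates the classifier."""
    return input("motor imagery detected? [y/N] ").strip().lower() == "y"

def serial_auditory_menu():
    """Commands are announced one at a time; producing the motor-imagery
    class while a command sounds selects it, idle lets the scan advance."""
    for cmd in itertools.cycle(COMMANDS):
        print(f"(audio cue) '{cmd}'")  # stands in for the auditory prompt
        if classify_epoch():
            return cmd

if __name__ == "__main__":
    print("selected command:", serial_auditory_menu())
```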

  9. A Trusted Portable Computing Device

    NASA Astrophysics Data System (ADS)

    Ming-wei, Fang; Jun-jun, Wu; Peng-fei, Yu; Xin-fang, Zhang

    A trusted portable computing device and its security mechanism are presented to address the security issues in mobile office work, such as virus and Trojan horse attacks and the loss or theft of storage devices. It uses a smart card to build a trusted portable security base, virtualization to create a secure virtual execution environment, a two-factor authentication mechanism to identify legitimate users, and dynamic encryption to protect data privacy. The security environment described in this paper is characterized by portability, security and reliability. It can meet the security requirements of mobile office work.

  10. Virtually pure near-infrared electroluminescence from exciplexes at polyfluorene/hexaazatrinaphthylene interfaces

    NASA Astrophysics Data System (ADS)

    Tregnago, G.; Fléchon, C.; Choudhary, S.; Gozalvez, C.; Mateo-Alonso, A.; Cacialli, F.

    2014-10-01

    Electronic processes at the heterojunction between chemically different organic semiconductors are of special significance for devices such as light-emitting diodes (LEDs) and photovoltaic diodes. Here, we report the formation of an exciplex state at the heterojunction of an electron-transporting material, a functionalized hexaazatrinaphthylene, and a hole-transporting material, poly(9,9-dioctylfluorene-alt-N-(4-butylphenyl)diphenylamine) (TFB). The energetics of the exciplex state leads to a spectral shift of ~1 eV between the exciton and the exciplex peak energies (at 2.58 eV and 1.58 eV, respectively). LEDs incorporating such bulk heterojunctions display complete quenching of the exciton luminescence, and a nearly pure near-infrared electroluminescence arising from the exciplex (at ~1.52 eV) with >98% of the emission at wavelengths above 700 nm at any operational voltage.
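
    The claim that the ~1.52 eV exciplex emission lies almost entirely above 700 nm can be checked with the standard photon energy-wavelength relation:

\[
\lambda\,[\mathrm{nm}] \;\approx\; \frac{1240}{E\,[\mathrm{eV}]},
\qquad
\lambda(1.52\ \mathrm{eV}) \approx \frac{1240}{1.52} \approx 816\ \mathrm{nm} > 700\ \mathrm{nm},
\]

    and the exciton-exciplex separation, 2.58 − 1.58 = 1.00 eV, matches the quoted ~1 eV shift.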

  11. Interfacing LabVIEW With Instrumentation for Electronic Failure Analysis and Beyond

    NASA Technical Reports Server (NTRS)

    Buchanan, Randy K.; Bryan, Coleman; Ludwig, Larry

    1996-01-01

    The Laboratory Virtual Instrumentation Engineering Workstation (LabVIEW) software is designed such that equipment and processes related to control systems can be operationally linked and controlled by the use of a computer. Various processes within the failure analysis laboratories of NASA's Kennedy Space Center (KSC) demonstrate the need for modernization and, in some cases, automation, using LabVIEW. An examination of procedures and practices within the Failure Analysis Laboratory resulted in the conclusion that some device was necessary to elevate the potential users of LabVIEW to an operational level in minimum time. This paper outlines the process involved in creating a tutorial application to enable personnel to apply LabVIEW to their specific projects. Suggestions for furthering the extent to which LabVIEW is used are provided in the areas of data acquisition and process control.

  12. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters will examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.

  13. Virtual reality hardware and graphic display options for brain-machine interfaces

    PubMed Central

    Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.

    2009-01-01

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069

  14. STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training

    NASA Image and Video Library

    2010-08-27

    JSC2010-E-121049 (27 Aug. 2010) --- NASA astronaut Andrew Feustel (foreground), STS-134 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration

  15. STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab

    NASA Image and Video Library

    2010-10-01

    JSC2010-E-170878 (1 Oct. 2010) --- NASA astronaut Michael Barratt, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration

  16. STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training

    NASA Image and Video Library

    2010-08-27

    JSC2010-E-121056 (27 Aug. 2010) --- NASA astronaut Gregory H. Johnson, STS-134 pilot, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration

  17. STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab

    NASA Image and Video Library

    2010-10-01

    JSC2010-E-170888 (1 Oct. 2010) --- NASA astronaut Nicole Stott, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration

  18. STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab

    NASA Image and Video Library

    2010-10-01

    JSC2010-E-170882 (1 Oct. 2010) --- NASA astronaut Nicole Stott, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration

  19. Building the Joint Battlespace Infosphere. Volume 2: Interactive Information Technologies

    DTIC Science & Technology

    1999-12-17

    G. A. Vouros, “A Knowledge-Based Methodology for Supporting Multilingual and User-Tailored Interfaces,” Interacting With Computers, Vol. 9 (1998), p...project is to develop a two-handed user interface to the stereoscopic field analyzer, an interactive 3-D scientific visualization system. The...62 See http://www.hitl.washington.edu/research/vrd/. 63 R. Baumann and R. Clavel, “Haptic Interface for Virtual Reality Based

  20. An Empirical Study on Operator Interface Design for Handheld Devices to Control Micro Aerial Vehicles

    DTIC Science & Technology

    2010-10-01

    An Empirical Study on Operator Interface Design for Handheld Devices to Control Micro Aerial Vehicles Ming Hou...Report DRDC Toronto TR 2010-075 October 2010 An Empirical Study on Operator Interface Design for Handheld Devices to...drives the need for a small and light controller which will not hinder a soldier carrying it. This requirement brings an issue of designing an

  1. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  2. The effects of virtual experience on attitudes toward real brands.

    PubMed

    Dobrowolski, Pawel; Pochwatko, Grzegorz; Skorko, Maciek; Bielecki, Maksymilian

    2014-02-01

    Although the commercial availability and implementation of virtual reality interfaces has seen rapid growth in recent years, little research has been conducted on the potential for virtual reality to affect consumer behavior. One unaddressed issue is how our real world attitudes are affected when we have a virtual experience with the target of those attitudes. This study compared participant (N=60) attitudes toward car brands before and after a virtual test drive of those cars was provided. Results indicated that attitudes toward test brands changed after experience with virtual representations of those brands. Furthermore, manipulation of the quality of this experience (in this case modification of driving difficulty) was reflected in the direction of attitude change. We discuss these results in the context of the associative-propositional evaluation model.

  3. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    This talk will overview the basic technologies related to the creation of virtual acoustic images, and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the formation of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
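
    The core operation behind a virtual acoustic display is binaural rendering: convolving a mono source with the pair of head-related impulse responses (HRIRs, the time-domain HRTFs) measured for the desired direction. The sketch below uses crude synthetic HRIRs; real displays use measured sets such as KEMAR.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Place a mono source at a virtual direction by convolving it with
    left- and right-ear HRIRs; returns a two-column stereo array."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)

fs = 44100
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)  # 1 s test tone

# Crude fake HRIRs: the right ear hears the source earlier and louder,
# mimicking a source located to the listener's right.
hrir_l = np.zeros(64); hrir_l[40] = 0.4
hrir_r = np.zeros(64); hrir_r[5] = 0.9

stereo = binaural_render(mono, hrir_l, hrir_r)
print(stereo.shape)  # (44100 + 63, 2)
```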

  4. Optical-to-optical interface device

    NASA Technical Reports Server (NTRS)

    Jacobson, A. D.; Bleha, W. P.; Miller, L.; Grinberg, J.; Fraas, L.; Margerum, D.

    1975-01-01

    An investigation was conducted to develop an optical-to-optical interface device capable of performing real-time incoherent-to-incoherent optical image conversion. The photoactivated liquid crystal light valve developed earlier represented a prototype liquid crystal light valve device capable of performing these functions. A device was developed which had high performance and extended lifetime.

  5. 47 CFR 15.115 - TV interface devices, including cable system terminal devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... output terminal(s) of the device terminated by a resistance equal to the rated output impedance. The... ohms) matching the rated output impedance of the TV interface device, shall not exceed the following... during maximum amplitude peaks across a resistance (R in ohms) matching the rated output impedance of the...

  6. 47 CFR 15.115 - TV interface devices, including cable system terminal devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... output terminal(s) of the device terminated by a resistance equal to the rated output impedance. The... ohms) matching the rated output impedance of the TV interface device, shall not exceed the following... during maximum amplitude peaks across a resistance (R in ohms) matching the rated output impedance of the...

  7. Collaboration in a Wireless Grid Innovation Testbed by Virtual Consortium

    NASA Astrophysics Data System (ADS)

    Treglia, Joseph; Ramnarine-Rieks, Angela; McKnight, Lee

    This paper describes the formation of the Wireless Grid Innovation Testbed (WGiT) coordinated by a virtual consortium involving academic and non-academic entities. Syracuse University and Virginia Tech are primary university partners with several other academic, government, and corporate partners. Objectives include: 1) coordinating knowledge sharing, 2) defining key parameters for wireless grids network applications, 3) dynamically connecting wired and wireless devices, content and users, 4) linking to VT-CORNET, Virginia Tech Cognitive Radio Network Testbed, 5) forming ad hoc networks or grids of mobile and fixed devices without a dedicated server, 6) deepening understanding of wireless grid application, device, network, user and market behavior through academic, trade and popular publications including online media, 7) identifying policy that may enable evaluated innovations to enter US and international markets and 8) implementation and evaluation of the international virtual collaborative process.

  8. Virtual reality: past, present and future.

    PubMed

    Gobbetti, E; Scateni, R

    1998-01-01

    This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domain, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.

  9. New Tools to Search for Data in the European Space Agency's Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Grotheer, E.; Macfarlane, A. J.; Rios, C.; Arviset, C.; Heather, D.; Fraga, D.; Vallejo, F.; De Marchi, G.; Barbarisi, I.; Saiz, J.; Barthelemy, M.; Docasal, R.; Martinez, S.; Besse, S.; Lim, T.

    2016-12-01

    The European Space Agency's (ESA) Planetary Science Archive (PSA), which can be accessed at http://archives.esac.esa.int/psa, provides public access to the archived data of Europe's missions to our neighboring planets. These datasets are compliant with the Planetary Data System (PDS) standards. Recently, a new interface has been released, which includes upgrades to make PDS4 data available from newer missions such as ExoMars and BepiColombo. Additionally, the PSA development team has been working to ensure that the legacy PDS3 data will be more easily accessible via the new interface as well. In addition to a new querying interface, the new PSA also allows access via the EPN-TAP and PDAP protocols. This makes the PSA data sets compatible with other archive-related tools and projects, such as the Virtual European Solar and Planetary Access (VESPA) project for creating a virtual observatory.
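
    EPN-TAP services expose an ADQL-queryable epn_core table over the standard TAP protocol, so a query can be issued with plain HTTP. The endpoint URL below is an assumption for illustration; consult the PSA documentation for the actual service address.

```python
import requests

# Hypothetical endpoint; check the PSA documentation for the real address.
TAP_SYNC = "https://archives.esac.esa.int/psa/epn-tap/tap/sync"

def query_epn_core(adql: str) -> str:
    """Run a synchronous ADQL query against an EPN-TAP service and return
    the CSV response. REQUEST/LANG/QUERY/FORMAT are standard TAP parameters."""
    resp = requests.get(TAP_SYNC, params={
        "REQUEST": "doQuery",
        "LANG": "ADQL",
        "FORMAT": "csv",
        "QUERY": adql,
    }, timeout=60)
    resp.raise_for_status()
    return resp.text

print(query_epn_core(
    "SELECT TOP 5 granule_uid, target_name FROM epn_core "
    "WHERE target_name = 'Mars'"
))
```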

  10. RoboCup-Rescue: an international cooperative research project of robotics and AI for the disaster mitigation problem

    NASA Astrophysics Data System (ADS)

    Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu

    2000-07-01

    This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates various sensor systems and controllers of infrastructures in real cities with the virtual world. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, and augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue contributes to the creation of a safer social system.

  11. Ambient Intelligence in Multimedia and Virtual Reality Environments for the rehabilitation

    NASA Astrophysics Data System (ADS)

    Benko, Attila; Cecilia, Sik Lanyi

    This chapter presents a general overview of the use of multimedia and virtual reality in rehabilitation and in assistive and preventive healthcare. It deals with AI-based multimedia and virtual reality applications intended for use by medical doctors, nurses, special teachers and other interested persons, and describes how multimedia and virtual reality can assist their work, including the ways these technologies can help patients' everyday life and rehabilitation. In the second part of the chapter we present the Virtual Therapy Room (VTR), an application realized for aphasic patients that was created for practicing communication and expressing emotions in a group therapy setting. The VTR shows a room that contains a virtual therapist and four virtual patients (avatars). The avatars use their knowledge base to answer the questions of the user, providing an AI environment for rehabilitation. The user of the VTR is the aphasic patient, who has to solve the exercises. The picture relevant to the actual task appears on the virtual blackboard. The patient answers questions of the virtual therapist; questions are about pictures describing an activity or an object at different levels. The patient can ask an avatar for an answer. If the avatar knows the answer, its emotion changes to happy instead of sad. The avatar expresses its emotions in different dimensions: its behavior, facial mimicry, voice tone and responses also change. The emotion system can be described as a deterministic finite automaton whose states are emotions and whose transition function is derived from the input-response reactions of an avatar. Natural language processing techniques were also implemented in order to establish high-quality human-computer interface windows for each of the avatars. Aphasic patients are able to interact with avatars via these interfaces. At the end of the chapter we outline possible future research directions.
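
    The emotion automaton lends itself to a compact sketch: states are emotions and transitions are driven by the outcome of the patient's request. The states and input symbols below are invented for illustration; the chapter does not enumerate them.

```python
# States are emotions; input symbols describe the outcome of the patient's
# request. Both sets are invented here for illustration.
EMOTION_DFA = {
    ("neutral",  "asked"):     "thinking",
    ("thinking", "knows"):     "happy",
    ("thinking", "knows_not"): "sad",
    ("happy",    "asked"):     "thinking",
    ("sad",      "asked"):     "thinking",
}

def step(state: str, symbol: str) -> str:
    """Deterministic transition; unknown pairs leave the emotion unchanged."""
    return EMOTION_DFA.get((state, symbol), state)

state = "neutral"
for event in ["asked", "knows", "asked", "knows_not"]:
    state = step(state, event)
    print(event, "->", state)  # thinking, happy, thinking, sad
```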

  12. Extending the Body to Virtual Tools Using a Robotic Surgical Interface: Evidence from the Crossmodal Congruency Task

    PubMed Central

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142

  13. Extending the body to virtual tools using a robotic surgical interface: evidence from the crossmodal congruency task.

    PubMed

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience.

  14. The effect of visualizing the flow of multimedia content among and inside devices.

    PubMed

    Lee, Dong-Seok

    2009-05-01

    This study introduces a user interface, referred to as the flow interface, which provides a graphical representation of the movement of content among and inside audio/video devices. The proposed interface provides a different frame of reference with content-oriented visualization of the generation, manipulation, storage, and display of content as well as input and output. The flow interface was applied to a VCR/DVD recorder combo, one of the most complicated consumer products. A between-group experiment was performed to determine whether the flow interface helps users to perform various tasks and to examine the learning effect of the flow interface, particularly in regard to hooking up and recording tasks. The results showed that participants with access to the flow interface performed better in terms of success rate and elapsed time. In addition, the participants indicated that they could easily understand the flow interface. The potential of the flow interface for application to other audio video devices, and design issues requiring further consideration, are discussed.

  15. Learning Intercultural Communication Skills with Virtual Humans: Feedback and Fidelity

    ERIC Educational Resources Information Center

    Lane, H. Chad; Hays, Matthew Jensen; Core, Mark G.; Auerbach, Daniel

    2013-01-01

    In the context of practicing intercultural communication skills, we investigated the role of fidelity in a game-based, virtual learning environment as well as the role of feedback delivered by an intelligent tutoring system. In 2 experiments, we compared variations on the game interface, use of the tutoring system, and the form of the feedback.…

  16. A Head in Virtual Reality: Development of A Dynamic Head and Neck Model

    ERIC Educational Resources Information Center

    Nguyen, Ngan; Wilson, Timothy D.

    2009-01-01

    Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…

  17. Designing Empathy: The Role of a "Control Room" in an E-Learning Environment

    ERIC Educational Resources Information Center

    Gentes, Annie; Cambone, Marie

    2013-01-01

    Purpose: The purpose of this paper is to focus on the challenge of designing an interface for a virtual class, where being represented together contributes to the learning process. It explores the possibility of virtual empathy. Design/methodology/approach: The challenges are: How can this feeling of empathy be recreated through a delicate staging…

  18. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with the implementation to date, and ongoing efforts to develop a more sophisticated intent-inferencing methodology.

  20. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires monitoring not only of servers, network equipment and associated software, but also of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) in order to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
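
    The agent pattern described above, a set of pluggable sensors sampled periodically, with the samples forwarded to a central repository, can be summarized in a few lines. The sketch below is hypothetical (the abstract does not give the Lemon sensor API); it shows only the general shape:

        import time

        class Sensor:
            """Hypothetical sensor plugin: returns one named metric sample."""
            def __init__(self, name, read):
                self.name, self.read = name, read

        def agent_loop(sensors, forward, period_s=60.0):
            # Sample every registered sensor each period and forward the
            # samples, e.g. to a central measurement repository.
            while True:
                forward({s.name: s.read() for s in sensors})
                time.sleep(period_s)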

  1. Optical HMI with biomechanical energy harvesters integrated in textile supports

    NASA Astrophysics Data System (ADS)

    De Pasquale, G.; Kim, SG; De Pasquale, D.

    2015-12-01

    This paper reports the design, prototyping and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove that harvests energy from finger motion. The device is aimed at medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backs of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made of conductive fabric to turn the communication channel on and off, and a LED. The electronic circuit used to manage the power and to control the light emitter includes a diode bridge, leveling capacitors, a storage battery, and the conductive-fabric switch. Communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energetic benefit of the energy harvester to battery lifetime is inversely proportional to the activation time of the optical emitter. In most applications, the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.
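
    Taking the stated inverse proportionality at face value, the gain constant can be inferred from the reported endpoints (5% duty cycle -> ~14% gain, 1% -> ~70%). A worked example under that assumption:

        # Battery-lifetime gain assumed inversely proportional to the optical
        # emitter's activation duty cycle; K is inferred from the abstract's
        # endpoints, not taken from the paper itself.
        K = 0.007  # gain fraction per unit duty cycle (assumed)

        def lifetime_increase(duty_cycle):
            return K / duty_cycle

        for d in (0.01, 0.05):
            print(f"duty {d:.0%}: +{lifetime_increase(d):.0%} battery lifetime")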

  2. From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric

    1994-01-01

    In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant to space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer, through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.

  3. Virtualisation Devices for Student Learning: Comparison between Desktop-Based (Oculus Rift) and Mobile-Based (Gear VR) Virtual Reality in Medical and Health Science Education

    ERIC Educational Resources Information Center

    Moro, Christian; Stromberga, Zane; Stirling, Allan

    2017-01-01

    Consumer-grade virtual reality has recently become available for both desktop and mobile platforms and may redefine the way that students learn. However, the decision regarding which device to utilise within a curriculum is unclear. Desktop-based VR has considerably higher setup costs involved, whereas mobile-based VR cannot produce the quality of…

  4. Interfacial rheology of model particles at liquid interfaces and its relation to (bicontinuous) Pickering emulsions

    NASA Astrophysics Data System (ADS)

    Thijssen, J. H. J.; Vermant, J.

    2018-01-01

    Interface-dominated materials are commonly encountered in both science and technology, and typical examples include foams and emulsions. Conventionally stabilised by surfactants, emulsions can also be stabilised by micron-sized particles. These so-called Pickering-Ramsden (PR) emulsions have received substantial interest, as they are model arrested systems, rather ubiquitous in industry and promising templates for advanced materials. The mechanical properties of the particle-laden liquid-liquid interface, probed via interfacial rheology, have been shown to play an important role in the formation and stability of PR emulsions. However, the morphological processes which control the formation of emulsions and foams in mixing devices, such as deformation, break-up, and coalescence, are complex and diverse, making it difficult to identify the precise role of the interfacial rheological properties. Interestingly, the role of interfacial rheology in the stability of bicontinuous PR emulsions (bijels) has been virtually unexplored, even though the phase separation process which leads to the formation of these systems is relatively simple and the interfacial deformation processes can be better conceptualised. Hence, the aims of this topical review are twofold. First, we review the existing literature on the interfacial rheology of particle-laden liquid interfaces in rheometrical flows, focussing mainly on model latex suspensions consisting of polystyrene particles carrying sulfate groups, which have been most extensively studied to date. The goal of this part of the review is to identify the generic features of the rheology of such systems. Secondly, we will discuss the relevance of these results to the formation and stability of PR emulsions and bijels.

  5. Translation of First North American 50 and 70 cc Total Artificial Heart Virtual and Clinical Implantations: Utility of 3D Computed Tomography to Test Fit Devices.

    PubMed

    Ferng, Alice S; Oliva, Isabel; Jokerst, Clinton; Avery, Ryan; Connell, Alana M; Tran, Phat L; Smith, Richard G; Khalpey, Zain

    2017-08-01

    Since the creation of SynCardia's 50 cc Total Artificial Hearts (TAHs), patients with irreversible biventricular failure now have two sizing options. Herein, a case series is presented of three patients who underwent successful 50 and 70 cc TAH implantation with complete closure of the chest cavity, utilizing preoperative "virtual implantation" of the different-sized devices for surgical planning. Computed tomography (CT) images were used for preoperative planning prior to TAH implantation. Three-dimensional (3D) reconstructions of preoperative chest CT images were generated, and both 50 and 70 cc TAHs were virtually implanted into the patients' thoracic cavities. During the simulation, the TAHs were projected over the native hearts in a similar position to the actual implantation, and the relationships between the devices and the atria, ventricles, chest wall, and diaphragm were assessed. The 3D reconstructed images and virtual modeling were used to simulate and determine, for each patient, whether the 50 or 70 cc TAH would have a higher likelihood of successful implantation without complications. Subsequently, all three patients received clinical implants of the properly sized TAH based on virtual modeling, and their chest cavities were fully closed. This virtual implantation increases our confidence that the selected TAH will fit within the thoracic cavity, allowing for an improved surgical outcome. Clinical implantation of the TAHs showed that our virtual modeling was an effective method for determining the correct fit and sizing of 50 and 70 cc TAHs. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  6. The connector space reduction mechanism

    NASA Technical Reports Server (NTRS)

    Milam, M. Bruce

    1990-01-01

    The Connector Space Reduction Mechanism (CSRM) is a simple device that can reduce the number of electromechanical devices on the Payload Interface Adapter/Station Interface Adapter (PIA/SIA) from 4 to 1. The device uses simplicity to attack the heart of the connector mating problem for large interfaces. The CSRM allows blind-mate connector mating with minimal alignment required over short distances. This eliminates potential interface binding problems and connector damage. The CSRM is compatible with G and H connectors and Moog Rotary Shutoff fluid couplings. The CSRM can also be used with less forgiving connectors, as was demonstrated in the lab. The CSRM is an exclusive NASA-Goddard design with a patent applied for. The CSRM is the correct mechanism for the PIA/SIA interface as well as other similar berthing interfaces.

  7. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
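
    The abstract does not give the control law; one plausible reading is that each pin's height samples the virtual object's surface beneath the hand's current position. A hypothetical sketch (the grid size, pitch, and surface function are assumptions, not the paper's parameters):

        import numpy as np

        def pin_heights(surface, hand_xy, grid=(8, 8), pitch=5.0):
            # Sample the virtual surface, surface(x, y) -> height, under a
            # regular pin array centred on the hand position.
            xs = hand_xy[0] + pitch * (np.arange(grid[0]) - grid[0] / 2)
            ys = hand_xy[1] + pitch * (np.arange(grid[1]) - grid[1] / 2)
            return np.array([[surface(x, y) for y in ys] for x in xs])

        # Example: the upper half of a sphere of radius 30 at the origin.
        sphere = lambda x, y: max(0.0, 900.0 - x**2 - y**2) ** 0.5
        print(pin_heights(sphere, hand_xy=(0.0, 0.0)).round(1))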

  8. Tuning the Seebeck effect in C60-based hybrid thermoelectric devices through temperature-dependent surface polarization and thermally-modulated interface dipoles.

    PubMed

    Liu, Yuchun; Xu, Ling; Zhao, Chen; Shao, Ming; Hu, Bin

    2017-06-07

    Fullerene (C60) is an important n-type organic semiconductor with high electron mobility and low thermal conductivity. In this work, we report the experimental results on the tunable Seebeck effect of C60 hybrid thin-film devices by adopting different oxide layers. After inserting n-type high-dielectric-constant titanium oxide (TiOx) and zinc oxide (ZnO) layers, we observed a significantly enhanced n-type Seebeck effect in oxide/C60 hybrid devices with Seebeck coefficients of -5.8 mV/K for TiOx/C60 and -2.08 mV/K for ZnO/C60 devices at 100 °C, compared with the value of -400 μV/K for the pristine C60 device. However, when a p-type nickel oxide (NiO) layer is inserted, the C60 hybrid devices show a p-type to n-type Seebeck effect transition when the temperature increases. The remarkable Seebeck effect and change in Seebeck coefficient in different oxide/C60 hybrid devices can be attributed to two reasons: the temperature-dependent surface polarization difference and thermally-dependent interface dipoles. Firstly, the surface polarization difference due to temperature-dependent electron-phonon coupling can be enhanced by inserting an oxide layer and functions as an additional driving force for the Seebeck effect development. Secondly, thermally-dependent interface dipoles formed at the electrode/oxide interface play an important role in modifying the density of interface states and affecting the charge diffusion in hybrid devices. The surface polarization difference and interface dipoles function in the same direction in hybrid devices with TiOx and ZnO dielectric layers, leading to an enhanced n-type Seebeck effect, while the surface polarization difference and interface dipoles generate the opposite impact on electron diffusion in ITO/NiO/C60/Al, leading to a p-type to n-type transition in the Seebeck effect. Therefore, inserting different oxide layers could effectively modulate the Seebeck effect of C60-based hybrid devices through the surface polarization difference and thermally-dependent interface dipoles, which represents an effective approach to tune the vertical Seebeck effect in organic functional devices.
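
    The open-circuit thermovoltage follows V = S * dT, so the reported coefficients translate directly into output voltages for a given temperature difference. A small worked example (the 10 K gradient is assumed for illustration):

        # Seebeck coefficients reported in the abstract, in mV/K.
        coeffs = {"TiOx/C60": -5.8, "ZnO/C60": -2.08, "pristine C60": -0.4}
        dT = 10.0  # assumed temperature difference (K)
        for device, S in coeffs.items():
            print(f"{device}: V = {S * dT:.1f} mV")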

  9. Local intelligent electronic device (IED) rendering templates over limited bandwidth communication link to manage remote IED

    DOEpatents

    Bradetich, Ryan; Dearien, Jason A; Grussling, Barry Jakob; Remaley, Gavin

    2013-11-05

    The present disclosure provides systems and methods for remote device management. According to various embodiments, a local intelligent electronic device (IED) may be in communication with a remote IED via a limited bandwidth communication link, such as a serial link. The limited bandwidth communication link may not support traditional remote management interfaces. According to one embodiment, a local IED may present an operator with a management interface for a remote IED by rendering locally stored templates. The local IED may render the locally stored templates using sparse data obtained from the remote IED. According to various embodiments, the management interface may be a web client interface and/or an HTML interface. The bandwidth required to present a remote management interface may be significantly reduced by rendering locally stored templates rather than requesting an entire management interface from the remote IED. According to various embodiments, an IED may comprise an encryption transceiver.
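
    The bandwidth saving comes from shipping only field values, not markup, across the serial link. A minimal sketch of that idea (the template and field names are hypothetical, not the patent's actual format):

        from string import Template

        # Locally stored template for a remote IED's status page.
        STATUS = Template("<h1>Relay $relay_id</h1>"
                          "<p>Breaker: $breaker_state</p>"
                          "<p>Frequency: $freq_hz Hz</p>")

        def render_remote_status(sparse_data):
            # Only these few values cross the limited-bandwidth link;
            # the markup is rendered entirely from the local template.
            return STATUS.substitute(sparse_data)

        print(render_remote_status(
            {"relay_id": "R1", "breaker_state": "closed", "freq_hz": 60.0}))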

  10. Interfacing 3D micro/nanochannels with a branch-shaped reservoir enhances fluid and mass transport

    NASA Astrophysics Data System (ADS)

    Kumar, Prasoon; Gandhi, Prasanna S.; Majumder, Mainak

    2017-01-01

    Three-dimensional (3D) micro/nanofluidic devices can accelerate progress in numerous fields such as tissue engineering, drug delivery, self-healing and cooling devices. However, efficient connections between networks of micro/nanochannels and external fluidic ports are key to successful applications of 3D micro/nanofluidic devices. Therefore, in this work, the role of reservoir geometry in interfacing with vascular (micro/nanochannel) networks, and in enabling connections with external fluidic ports while maintaining device compactness, has been investigated experimentally and theoretically. Statistical modelling suggested that a branch-shaped reservoir demonstrates enhanced interfacing with vascular networks when compared to other regular geometries of reservoirs. Time-lapse dye flow experiments by capillary action through fabricated 3D micro/nanofluidic devices confirmed the connectivity of branch-shaped reservoirs with micro/nanochannel networks in fluidic devices. This demonstrated a ~2.2-fold enhancement of the volumetric flow rate in micro/nanofluidic networks when interfaced to branch-shaped reservoirs over rectangular reservoirs. The enhancement is due to a ~2.8-fold increase in the perimeter of the reservoirs. In addition, the mass transfer experiments exhibited a ~1.7-fold enhancement in solute flux across 3D micro/nanofluidic devices that interfaced with branch-shaped reservoirs when compared to rectangular reservoirs. The fabrication of 3D micro/nanofluidic devices and their efficient interfacing through branch-shaped reservoirs to an external fluidic port can potentially enable their use in complex applications, in which enhanced surface-to-volume interactions are desirable.

  11. Surgery applications of virtual reality

    NASA Technical Reports Server (NTRS)

    Rosen, Joseph

    1994-01-01

    Virtual reality is a computer-generated technology which allows information to be displayed in a simulated, but lifelike, environment. In this simulated 'world', users can move and interact as if they were actually a part of that world. This new technology will be useful in many different fields, including the field of surgery. Virtual reality systems can be used to teach surgical anatomy, diagnose surgical problems, plan operations, simulate and perform surgical procedures (telesurgery), and predict the outcomes of surgery. The authors of this paper describe the basic components of a virtual reality surgical system. These components include: the virtual world, the virtual tools, the anatomical model, the software platform, the host computer, the interface, and the head-coupled display. They also review the progress towards using virtual reality for surgical training, planning, telesurgery, and predicting outcomes. Finally, the authors present a training system being developed for the practice of new procedures in abdominal surgery.

  12. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    PubMed

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
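
    Because the VM exposes a standard PostgreSQL schema, it can be queried with any Postgres client. A hedged example using psycopg2 (the connection parameters are placeholders; table and column names follow the public ChEMBL schema and may vary by release):

        import psycopg2

        conn = psycopg2.connect(dbname="chembl", user="chembl")  # placeholders
        cur = conn.cursor()
        cur.execute("""
            SELECT md.chembl_id, cs.canonical_smiles
            FROM molecule_dictionary md
            JOIN compound_structures cs ON cs.molregno = md.molregno
            LIMIT 5""")
        for chembl_id, smiles in cur.fetchall():
            print(chembl_id, smiles)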

  13. Experiments and Analysis on a Computer Interface to an Information-Retrieval Network.

    ERIC Educational Resources Information Center

    Marcus, Richard S.; Reintjes, J. Francis

    A primary goal of this project was to develop an interface that would provide direct access for inexperienced users to existing online bibliographic information retrieval networks. The experiment tested the concept of a virtual-system mode of access to a network of heterogeneous interactive retrieval systems and databases. An experimental…

  14. The Design of Motivational Agents and Avatars

    ERIC Educational Resources Information Center

    Baylor, Amy L.

    2011-01-01

    While the addition of an anthropomorphic interface agent to a learning system generally has little direct impact on learning, it potentially has a huge impact on learner motivation. As such agents become increasingly ubiquitous on the Internet, in virtual worlds, and as interfaces for learning and gaming systems, it is important to design them to…

  15. User Acceptance of a Haptic Interface for Learning Anatomy

    ERIC Educational Resources Information Center

    Yeom, Soonja; Choi-Lundberg, Derek; Fluck, Andrew; Sale, Arthur

    2013-01-01

    Visualizing the structure and relationships in three dimensions (3D) of organs is a challenge for students of anatomy. To provide an alternative way of learning anatomy engaging multiple senses, we are developing a force-feedback (haptic) interface for manipulation of 3D virtual organs, using design research methodology, with iterations of system…

  16. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    ERIC Educational Resources Information Center

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, obtaining active, whole-focused, and engaging learning results from a design is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars in the vertical screen and can react naturally according to the…

  17. Telepresence: A "Real" Component in a Model to Make Human-Computer Interface Factors Meaningful in the Virtual Learning Environment

    ERIC Educational Resources Information Center

    Selverian, Melissa E. Markaridian; Lombard, Matthew

    2009-01-01

    A thorough review of the research relating to Human-Computer Interface (HCI) form and content factors in the education, communication and computer science disciplines reveals strong associations of meaningful perceptual "illusions" with enhanced learning and satisfaction in the evolving classroom. Specifically, associations emerge…

  18. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements is achieved via hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.

  19. A Simple and Customizable Web Interface to the Virtual Solar Observatory

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Hourcle, J.; Suarez-Sola, I.; Davey, A.

    2010-05-01

    As the variety and number of solar data sources continue to increase at a rapid rate, providing methods to search through these sources becomes increasingly important. By taking advantage of the power of modern JavaScript libraries, a new version of the Virtual Solar Observatory's web interface aims to provide a significantly faster and simpler way to explore the multitude of available data repositories. Querying asynchronously serves not only to eliminate bottlenecks resulting from slow or unresponsive data providers, but also allows results to be displayed as soon as they are returned. Implicit pagination and post-query filtering enable users to work with large result sets, while a more modular and customizable UI provides a mechanism for customizing both the look-and-feel and behavior of the VSO web interface. Finally, the new web interface features a custom widget system capable of displaying additional tools and information alongside the standard VSO search form. Interested users can also write their own widgets and submit them for future incorporation into VSO.
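
    The key behavior, displaying each provider's results as soon as they arrive so that one slow provider cannot block the rest, is language-independent. A Python stand-in for the pattern (the provider names and latencies are made up; the real interface is JavaScript over HTTP):

        import asyncio

        async def query_provider(name, latency_s):
            await asyncio.sleep(latency_s)  # stand-in for a network request
            return name, [f"{name}-record-{i}" for i in range(3)]

        async def search(providers):
            tasks = [asyncio.create_task(query_provider(n, d))
                     for n, d in providers]
            # Results appear in completion order, not submission order.
            for task in asyncio.as_completed(tasks):
                name, records = await task
                print(name, records)

        asyncio.run(search([("SDAC", 0.2), ("NSO", 1.5), ("SHA", 0.7)]))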

  20. A virtual tour of geological heritage: Valourising geodiversity using Google Earth and QR code

    NASA Astrophysics Data System (ADS)

    Martínez-Graña, A. M.; Goy, J. L.; Cimarra, C. A.

    2013-12-01

    When making land-use plans, it is necessary to inventory and catalogue the geological heritage and geodiversity of a site to establish an apolitical conservation protection plan that meets the educational and social needs of society. New technologies make it possible to create virtual databases using virtual globes - e.g., Google Earth - and other personal-use geomatics applications (smartphones, tablets, PDAs) for accessing geological heritage information in "real time" for scientific, educational, and cultural purposes via a virtual geological itinerary. Seventeen mapped and georeferenced geosites have been created in Keyhole Markup Language for use in map layers at the geological itinerary stops in different applications. A virtual tour has been developed for Las Quilamas Natural Park, which is located in the Spanish Central System, using geological layers and topographic and digital terrain models that can be overlaid in a 3D model. The Google Earth application was used to import the geosite placemarks. For each geosite, a tab has been developed that shows a description of the geology with photographs and diagrams and that evaluates the scientific, educational, and tourism quality. Augmented reality allows the user to access these georeferenced thematic layers and overlay data, images, and graphics in real time on their mobile devices. These virtual tours can be incorporated into subject guides designed by the public. Seven educational and interpretive panels describing some of the geosites were designed and tagged with a QR code that can be printed at each stop or in the printed itinerary. These QR codes can be scanned with the camera found on most mobile devices, and video virtual tours can be viewed on these devices. The virtual tour of the geological heritage can be used to show tourists the geological history of the Las Quilamas Natural Park using new geomatics technologies (virtual globes, augmented reality, and QR codes).
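
    A KML placemark for a geosite is only a few lines of XML. A minimal illustration (the geosite name, coordinates, and description are invented):

        from xml.sax.saxutils import escape

        def geosite_placemark(name, lon, lat, description=""):
            # One <Placemark> element, as imported by Google Earth.
            return (f"<Placemark><name>{escape(name)}</name>"
                    f"<description>{escape(description)}</description>"
                    f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                    f"</Placemark>")

        print(geosite_placemark("Geosite 1 - panoramic viewpoint",
                                -6.05, 40.55, "Scientific/educational stop"))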

  1. Building Interfaces: Mechanisms, fabrication, and applications at the biotic/abiotic interface for silk fibroin based bioelectronic and biooptical devices

    NASA Astrophysics Data System (ADS)

    Brenckle, Mark

    Recent efforts in bioelectronics and biooptics have led to a shift in the materials and form factors used to make medical devices, including high performance, implantable, and wearable sensors. In this context, biopolymer-based devices must be processed to interface the soft, curvilinear biological world with the rigid, inorganic world of traditional electronics and optics. This poses new material-specific fabrication challenges in designing such devices, which in turn requires further understanding of the fundamental physical behaviors of the materials in question. As a biopolymer, silk fibroin protein has remarkable promise in this space, due to its bioresorbability, mechanical strength, optical clarity, ability to be reshaped on the micro- and nano-scale, and ability to stabilize labile compounds. Application of this material to devices at the biotic/abiotic interface will require the development of fabrication techniques for nano-patterning, lithography, multilayer adhesion, and transfer printing in silk materials. In this work, we address this need through fundamental study of the thermal and diffusional properties of silk protein as it relates to these fabrication strategies. We then leverage these properties to fabricate devices well suited to the biotic/abiotic interface in three areas: shelf-ready sensing, implantable transient electronics, and wearable biosensing. These example devices will illustrate the advantages of silk in this class of bioelectronic and biooptical devices, from fundamentals through application, and contribute to a silk platform for the development of future devices that combine biology with high technology.

  2. Rodent wearable ultrasound system for wireless neural recording.

    PubMed

    Piech, David K; Kay, Joshua E; Boser, Bernhard E; Maharbiz, Michel M

    2017-07-01

    Advances in minimally-invasive, distributed biological interface nodes enable possibilities for networks of sensors and actuators to connect the brain with external devices. The recent development of the neural dust sensor mote has shown that utilizing ultrasound backscatter communication enables untethered sub-mm neural recording devices. These implanted sensor motes require a wearable external ultrasound interrogation device to enable in-vivo, freely-behaving neural interface experiments. However, minimizing the complexity and size of the implanted sensors shifts the power and processing burden to the external interrogator. In this paper, we present an ultrasound backscatter interrogator that supports real-time backscatter processing in a rodent-wearable, completely wireless device. We demonstrate a generic digital encoding scheme which is intended for transmitting neural information. The system integrates a front-end ultrasonic interface ASIC with off-the-shelf components to enable a highly compact ultrasound interrogation device intended for rodent neural interface experiments but applicable to other model systems.

  3. Ray Tracing with Virtual Objects.

    ERIC Educational Resources Information Center

    Leinoff, Stuart

    1991-01-01

    Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)
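
    For thin lenses, the real/virtual distinction falls straight out of the lens equation 1/d_o + 1/d_i = 1/f: a negative image distance signals a virtual image. A short numeric check (focal length and object distance chosen for illustration):

        # Thin-lens equation; negative d_i means a virtual image.
        def image_distance(f_mm, d_object_mm):
            return 1.0 / (1.0 / f_mm - 1.0 / d_object_mm)

        d_i = image_distance(f_mm=100.0, d_object_mm=50.0)  # object inside f
        print(d_i)          # -100.0 -> virtual image
        print(-d_i / 50.0)  # magnification +2 (upright, enlarged)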

  4. Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.

    PubMed

    Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho

    2016-07-01

    We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden cable-driven mechanism to provide the grasp and push-pull sensations while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force-sensing/force-feedback system and compared the sensed force with the force reproduced by the haptic device. Finally, a needle insertion test was done to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched the sensed forces of the slave robot. We successfully applied our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than visual observation alone. We have developed a suitable haptic device that produces both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.

  5. From planes to brains: parallels between military development of virtual reality environments and virtual neurological surgery.

    PubMed

    Schmitt, Paul J; Agarwal, Nitin; Prestigiacomo, Charles J

    2012-01-01

    Military explorations of the practical role of simulators have served as a driving force for much of the virtual reality technology that we have today. The evolution of 3-dimensional and virtual environments from the early flight simulators used during World War II to the sophisticated training simulators in the modern military followed a path that virtual surgical and neurosurgical devices have already begun to parallel. By understanding the evolution of military simulators as well as comparing and contrasting that evolution with current and future surgical simulators, it may be possible to expedite the development of appropriate devices and establish their validity as effective training tools. As such, this article presents a historical perspective examining the progression of neurosurgical simulators, the establishment of effective and appropriate curricula for using them, and the contributions that the military has made during the ongoing maturation of this exciting treatment and training modality. Copyright © 2012. Published by Elsevier Inc.

  6. The Application of Leap Motion in Astronaut Virtual Training

    NASA Astrophysics Data System (ADS)

    Qingchao, Xie; Jiangang, Chao

    2017-03-01

    With the development of computer vision, virtual reality has been applied in astronaut virtual training. As an advanced optical device for hand tracking, Leap Motion can provide precise and fluid tracking of the hands, making it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion. Finally, the ability of Leap Motion to handle occlusion is analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.

  7. Optical processing for semiconductor device fabrication

    NASA Technical Reports Server (NTRS)

    Sopori, Bhushan L.

    1994-01-01

    A new technique for semiconductor device processing is described that uses optical energy to produce local heating/melting in the vicinity of a preselected interface of the device. This process, called optical processing, invokes assistance of photons to enhance interface reactions such as diffusion and melting, as compared to the use of thermal heating alone. Optical processing is performed in a 'cold wall' furnace, and requires considerably lower energies than furnace or rapid thermal annealing. This technique can produce some device structures with unique properties that cannot be produced by conventional thermal processing. Some applications of optical processing involving semiconductor-metal interfaces are described.

  8. Effect of two layouts on high technology AAC navigation and content location by people with aphasia.

    PubMed

    Wallace, Sarah E; Hux, Karen

    2014-03-01

    Navigating high-technology augmentative and alternative communication (AAC) devices with dynamic displays can be challenging for people with aphasia. The purpose of this study was to determine which of two AAC interfaces two people with aphasia could use most efficiently and accurately. The researchers used a BCB'C' alternating treatment design to provide device-use instruction to two people with severe aphasia regarding two personalised AAC interfaces that had different navigation layouts but identical content. One interface had static buttons for homepage and go-back features, and the other interface had static buttons in a navigation ring layout. Throughout treatment, the researchers monitored participants' mastery patterns regarding navigation efficiency and accuracy when locating target messages. Participants' accuracy and efficiency improved with both interfaces given intervention; however, the navigation ring layout appeared more transparent and better facilitated navigation than the homepage layout. People with aphasia can learn to navigate computerised devices; however, interface layout can substantially affect the efficiency and accuracy with which they locate messages. Given intervention incorporating errorless learning principles, people with chronic aphasia can learn to navigate across multiple device levels to locate target sentences. Both navigation ring and homepage interfaces may be used by people with aphasia. Some people with aphasia may be more consistent and efficient in finding target sentences using the navigation ring interface than the homepage interface. Additionally, the navigation ring interface may be more transparent and easier for people with aphasia to master--that is, they may require fewer intervention sessions to learn to navigate it. Generalisation of learning may result from use of the navigation ring interface. Specifically, people with aphasia may improve navigation with the homepage interface as a result of instruction on the navigation ring interface, but not vice versa.

  9. The Design of Wayfinding Affordance and Its Influence on Task Performance and Perceptual Experience in Desktop Virtual Environments

    ERIC Educational Resources Information Center

    Choi, Gil Ok

    2008-01-01

    For the past few years, virtual environments (VEs) have gained broad attention from both scholarly and practitioner communities. However, in spite of intense and widespread efforts, most VE-related research has focused on the technical aspects of applications, and the necessary theoretical framework to assess the quality of interfaces and designs…

  10. STS-105 Crew Training in VR Lab

    NASA Image and Video Library

    2001-03-15

    JSC2001-00751 (15 March 2001) --- Astronaut Scott J. Horowitz, STS-105 mission commander, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with International Space Station (ISS) elements.

  11. Photographic coverage of STS-112 during EVA 3 in VR Lab.

    NASA Image and Video Library

    2002-08-21

    JSC2002-E-34622 (21 August 2002) --- Astronaut David A. Wolf, STS-112 mission specialist, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Atlantis. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with ISS elements.

  12. STS-115 Virtual Lab Training

    NASA Image and Video Library

    2005-06-07

    JSC2005-E-21191 (7 June 2005) --- Astronaut Steven G. MacLean, STS-115 mission specialist representing the Canadian Space Agency, uses the virtual reality lab at the Johnson Space Center to train for his duties aboard the space shuttle. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  13. STS-105 Crew Training in VR Lab

    NASA Image and Video Library

    2001-03-15

    JSC2001-00758 (15 March 2001) --- Astronaut Frederick W. Sturckow, STS-105 pilot, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with International Space Station (ISS) elements.

  14. STS-115 Virtual Lab Training

    NASA Image and Video Library

    2005-06-07

    JSC2005-E-21192 (7 June 2005) --- Astronauts Christopher J. Ferguson (left), STS-115 pilot, and Daniel C. Burbank, mission specialist, use the virtual reality lab at the Johnson Space Center to train for their duties aboard the space shuttle. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  15. Real-Life Migrants on the MUVE: Stories of Virtual Transitions

    ERIC Educational Resources Information Center

    Perkins, Ross A.; Arreguin, Cathy

    2007-01-01

    The communication and collaborative interface known as a multi-user virtual environment (MUVE) has existed since as early as the late 1970s. MUVEs refer to programs that have an animated character ("avatar") controlled by a user within a wider environment that can be explored--or built--at will. Second Life, a MUVE created by San Francisco-based…

  16. Virtual Teleoperation for Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-01-24

    Gilbert, S., "Wayfinder: Evaluating Multitouch Interaction in Supervisory Control of Unmanned Vehicles," Proceedings of ASME 2nd World Conference on… • An interactive virtual reality environment that fuses available information into a coherent picture that can be viewed from multiple perspectives and scales • … for multimodal interaction • Generally abstracted controller hardware and graphical interfaces facilitating deployment on a variety of VR platforms

  17. A Closed-loop Brain Computer Interface to a Virtual Reality Avatar: Gait Adaptation to Visual Kinematic Perturbations

    PubMed Central

    Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.

    2016-01-01

    The control of human bipedal locomotion is of great interest to the field of lower-body brain computer interfaces (BCIs) for rehabilitation of gait. While the feasibility of a closed-loop BCI system for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a virtual reality (BCI-VR) environment has yet to be demonstrated. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control the walking movements of a virtual avatar. Moreover, virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. These findings have implications for the development of BCI-VR systems for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI system. PMID:27713915
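
    The abstract does not specify the decoder; linear models are a common choice for continuous kinematics decoding from EEG. A hypothetical sketch with synthetic data standing in for band-power features and six lower-limb joint angles:

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        X = rng.standard_normal((5000, 64))               # EEG features per step
        W = rng.standard_normal((64, 6))
        Y = X @ W + 0.1 * rng.standard_normal((5000, 6))  # six joint angles

        # Fit a ridge decoder on the first 4000 steps, score on the rest.
        decoder = Ridge(alpha=1.0).fit(X[:4000], Y[:4000])
        print(decoder.score(X[4000:], Y[4000:]))          # held-out R^2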

  18. Aerospace Ground Equipment for model 4080 sequence programmer. A standard computer terminal is adapted to provide convenient operator to device interface

    NASA Technical Reports Server (NTRS)

    Nissley, L. E.

    1979-01-01

    The Aerospace Ground Equipment (AGE) provides an interface between a human operator and a complete spaceborne sequence timing device with a memory storage program. The AGE provides a means for composing, editing, syntax checking, and storing timing device programs. The AGE is implemented with a standard Hewlett-Packard 2649A terminal system and a minimum of special hardware. The terminal's dual tape interface is used to store timing device programs and to read in special AGE operating system software. To compose a new program for the timing device the keyboard is used to fill in a form displayed on the screen.

  19. Virtual reality for intelligent and interactive operating, training, and visualization systems

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Schluse, Michael

    2000-10-01

    Virtual reality methods allow a new and intuitive way of communication between man and machine. The basic idea of virtual reality (VR) is the generation of artificial, computer-simulated worlds, which the user can not only look at but also actively interact with, using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the necessary actions in reality to execute the user's task. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The virtual reality system of the IRF, COSIMIR/VR, forms the basis for several projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which are presented in the final paper together with the key ideas of this virtual reality system.

  20. The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics.

    PubMed

    O'Connor, Timothy F; Fach, Matthew E; Miller, Rachel; Root, Samuel E; Mercier, Patrick P; Lipomi, Darren J

    2017-01-01

    This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics.
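
    Letter decoding from the strain sensors can be as simple as nearest-template matching on the five knuckle readings. A hypothetical sketch (the calibration vectors are invented, not the paper's):

        import numpy as np

        # One 5-element flex vector per letter (0 = straight, 1 = fully bent).
        templates = {"A": np.array([0.1, 0.9, 0.9, 0.9, 0.9]),
                     "B": np.array([0.9, 0.1, 0.1, 0.1, 0.1]),
                     "L": np.array([0.1, 0.1, 0.9, 0.9, 0.9])}

        def decode_letter(reading):
            # Nearest-template match on the piezoresistive readings.
            return min(templates,
                       key=lambda k: np.linalg.norm(templates[k] - reading))

        print(decode_letter(np.array([0.15, 0.85, 0.95, 0.9, 0.88])))  # "A"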
