NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The system is a highly effective human-machine interface that detects head movement from the changing positions and number of light sources mounted on the head. When the user wears the head-mounted display to browse a computer screen, the system captures images of the user's eyes with CCD cameras, which also measure the angle and position of the light sources. In the eye-tracking system, software locates the center point of each pupil in the images and records moving traces and pupil diameters. In the head-gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera placed in front of the user. The software locates the center point of the head and transfers it to screen coordinates, allowing the user to control the cursor by head motions. The eye-controlled and head-controlled interfaces are combined for virtual reality applications.
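A minimal sketch of the pupil-localization step described above, assuming a grayscale eye image and a fixed dark-pupil threshold; the OpenCV-based approach and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, thresh=40):
    """Estimate the pupil center as the centroid of the darkest blob.

    eye_gray: single-channel image of one eye (illustrative input).
    thresh:   intensity cutoff separating the dark pupil from iris/sclera
              (an assumed value; a real system would calibrate it per user).
    """
    # Dark pixels (pupil) become foreground after inverse thresholding.
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # Keep the largest connected component to reject eyelashes and shadows.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Equivalent-circle diameter doubles as the pupil-diameter record.
    diameter = 2.0 * np.sqrt(cv2.contourArea(pupil) / np.pi)
    return (cx, cy), diameter
```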
Human-machine interface hardware: The next decade
NASA Technical Reports Server (NTRS)
Marcus, Elizabeth A.
1991-01-01
In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable and faster, and programs become more sophisticated, it becomes apparent that interface hardware is the key to an exciting future in computing. How can a user interact with and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory-motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroads in design. Therefore, we as developers need to explore how interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.
An automatic eye detection and tracking technique for stereo video sequences
NASA Astrophysics Data System (ADS)
Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim
2009-05-01
Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to this problem is the development of algorithms that allow the computer to understand human intentions based on facial expressions, head motion patterns, and speech. In this work, we investigate the feasibility of a stereo system to effectively determine the head position, including the head rotation angles, based on the detection of eye pupils.
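The geometry behind such a stereo estimate can be sketched as follows, assuming a rectified camera pair with known focal length and baseline and the principal point at the image origin; this is textbook triangulation, not necessarily the paper's algorithm.

```python
import numpy as np

def triangulate(xl, xr, y, f, B):
    """Recover a 3-D point from a rectified stereo pair.

    xl, xr: horizontal pixel coordinates of the same pupil in the
            left/right images; y: (shared) vertical coordinate.
    f: focal length in pixels, B: baseline in metres (assumed calibrated).
    """
    d = xl - xr                      # disparity
    Z = f * B / d                    # depth from similar triangles
    X = xl * Z / f
    Y = y * Z / f
    return np.array([X, Y, Z])

def head_pose_from_pupils(left_eye_3d, right_eye_3d):
    """Yaw of the interocular axis: a coarse stand-in for the full
    head rotation estimate described in the paper."""
    v = right_eye_3d - left_eye_3d
    yaw = np.degrees(np.arctan2(v[2], v[0]))   # rotation about the vertical axis
    midpoint = 0.5 * (left_eye_3d + right_eye_3d)
    return midpoint, yaw
```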
Williams, Matthew R.; Kirsch, Robert F.
2013-01-01
We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, EMG from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2D, center-out, Fitts' Law style task and performance was evaluated using several measures. Overall, head-orientation-commanded motion resembled mouse-commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG-commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshooting the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in the two directions. While the relative performance of each user interface differs, each has specific advantages depending on the application. PMID:18990652
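For context, performance in such center-out tasks is commonly summarized by Fitts's index of difficulty and throughput. A minimal computation (Shannon formulation) is sketched below; units are illustrative, since the abstract does not state the exact measures used.

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Index of difficulty and throughput for one center-out movement.

    distance: start-to-target distance, width: target width (same units,
    e.g. pixels); movement_time in seconds.  Shannon formulation.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)        # bits
    return index_of_difficulty, index_of_difficulty / movement_time  # bits, bits/s
```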
DMA shared byte counters in a parallel computer
Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos
2010-04-06
A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status registers and control registers. The injection FIFO metadata maintains the memory locations of the injection FIFOs, including each FIFO's current head and tail, and the reception FIFO metadata maintains the memory locations of the reception FIFOs, including each FIFO's current head and tail. The injection byte counters and reception byte counters may be shared between messages.
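A toy software model of the shared-counter idea, in the spirit of the description above; the real mechanism is hardware registers in the DMA engine, not Python objects.

```python
class SharedByteCounter:
    """Toy model of a reception byte counter shared between messages.

    Software adds each expected message length to the counter before the
    transfer; the DMA engine subtracts bytes as packets arrive.  All
    messages sharing the counter are known to be complete when it
    returns to zero, so one counter can track many messages.
    """
    def __init__(self):
        self.value = 0

    def post_message(self, nbytes):
        """Done by software when a receive is posted."""
        self.value += nbytes

    def packet_received(self, nbytes):
        """Done by the DMA engine for each arriving packet."""
        self.value -= nbytes

    def all_complete(self):
        return self.value == 0
```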
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-inputs-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Preliminary Design and Evaluation of Portable Electronic Flight Progress Strips
NASA Technical Reports Server (NTRS)
Doble, Nathan A.; Hansman, R. John
2002-01-01
There has been growing interest in using electronic alternatives to the paper Flight Progress Strip (FPS) for air traffic control. However, most research has been centered on radar-based control environments, and has not considered the unique operational needs of the airport air traffic control tower. Based on an analysis of the human factors issues for control tower Decision Support Tool (DST) interfaces, a requirement has been identified for an interaction mechanism which replicates the advantages of the paper FPS (e.g., head-up operation, portability) but also enables input and output with DSTs. An approach has been developed which uses a Portable Electronic FPS that has attributes of both a paper strip and an electronic strip. The prototype flight strip system uses Personal Digital Assistants (PDAs) to replace individual paper strips in addition to a central management interface which is displayed on a desktop computer. Each PDA is connected to the management interface via a wireless local area network. The Portable Electronic FPSs replicate the core functionality of paper flight strips and have additional features which provide a heads-up interface to a DST. A departure DST is used as a motivating example. The central management interface is used for aircraft scheduling and sequencing and provides an overview of airport departure operations. This paper will present the design of the Portable Electronic FPS system as well as preliminary evaluation results.
Human computer interface guide, revision A
NASA Technical Reports Server (NTRS)
1993-01-01
The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.
NASA Astrophysics Data System (ADS)
Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.
2000-08-01
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
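A minimal sketch of one such "modality server": a recognizer wrapped behind a line-oriented socket interface. The JSON-lines protocol, port, and recognize() stub are illustrative assumptions; the paper's actual wire protocol is not described in the abstract.

```python
import json
import socketserver

class ModalityServer(socketserver.StreamRequestHandler):
    """Stand-in for a modality server: encapsulates a recognizer and
    exposes it through a simple newline-delimited JSON socket protocol,
    so clients stay independent of the underlying package."""

    def handle(self):
        for line in self.rfile:                      # one JSON request per line
            request = json.loads(line)
            result = self.recognize(request["data"])
            self.wfile.write((json.dumps(result) + "\n").encode())

    def recognize(self, data):
        # Placeholder for the encapsulated commercial/research package.
        return {"modality": "speech", "hypothesis": "<decoded text>"}

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), ModalityServer) as srv:
        srv.serve_forever()
```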
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckman, B.K.; Chinn, V.K.
1981-01-01
The development and use of computer programs written to produce the paper tape needed for the automation, or numeric control, of drill presses employed to fabricate computer-designed printed circuit boards are described. (LCL)
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of head-mounted displays (HMD) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs in computer-aided surgery. The hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate are enormous, and the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was made by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system via TCP/IP. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
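The projection-parameter calibration step can be sketched as follows. Note that OpenCV's calibrateCamera implements a Zhang-style method rather than Tsai's algorithm, so this is a stand-in that recovers the same kind of parameters (intrinsics, distortion, per-view pose), not the paper's exact procedure.

```python
import cv2

def calibrate_projection(object_points, image_points, image_size):
    """Recover camera intrinsics and per-view pose from corresponding
    3-D/2-D point sets (e.g. corners of a tracked calibration grid).

    object_points: list of (N,3) float32 arrays in grid coordinates
    image_points:  list of (N,1,2) float32 arrays of detected corners
    image_size:    (width, height) of the display/camera image
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    # K and dist model the projection; rvecs/tvecs give each view's pose,
    # which is what registers the rendered graphics to the optics.
    return rms, K, dist, rvecs, tvecs
```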
Takano, Kouji; Hata, Naoki; Kansaku, Kenji
2011-01-01
The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307
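A sketch of the marker-detection step that triggers the control panel, using OpenCV's ArUco module (from opencv-contrib) as an illustrative stand-in for the paper's AR marker system; the dictionary, marker size, and calibration inputs are assumptions.

```python
import cv2

def panel_pose_from_marker(frame, K, dist, marker_len=0.05):
    """Detect an AR marker and return its pose, at which an AR system
    would superimpose the flicker control panel on the see-through HMD.

    K, dist: camera intrinsics and distortion (assumed pre-calibrated);
    marker_len: printed marker side length in metres (assumed value).
    """
    aruco = cv2.aruco
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    corners, ids, _ = aruco.detectMarkers(frame, dictionary)
    if ids is None:
        return None                      # no marker, no panel
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(corners, marker_len, K, dist)
    return rvecs[0], tvecs[0]            # orientation and position of first marker
```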
Multimodal neuroelectric interface development
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Rosipal, Roman; Clanton, Sam T.; Matthews, Bryan; Hibbs, Andrew D.; Matthews, Robert; Krupka, Michael
2003-01-01
We are developing electromyographic and electroencephalographic methods, which draw control signals for human-computer interfaces from the human nervous system. We have made progress in four areas: 1) real-time pattern recognition algorithms for decoding sequences of forearm muscle activity associated with control gestures; 2) signal-processing strategies for computer interfaces using electroencephalogram (EEG) signals; 3) a flexible computation framework for neuroelectric interface research; and 4) noncontact sensors, which measure electromyogram or EEG signals without resistive contact to the body.
Implanted Miniaturized Antenna for Brain Computer Interface Applications: Analysis and Design
Zhao, Yujuan; Rennaker, Robert L.; Hutchens, Chris; Ibrahim, Tamer S.
2014-01-01
Implantable Brain Computer Interfaces (BCIs) are designed to provide real-time control signals for prosthetic devices, study brain function, and/or restore sensory information lost as a result of injury or disease. Using Radio Frequency (RF) to wirelessly power a BCI could widely extend the number of applications and increase chronic in-vivo viability. However, due to the limited size and the electromagnetic loss of human brain tissues, implanted miniaturized antennas suffer low radiation efficiency. This work presents simulations, analysis and designs of implanted antennas for a wireless implantable RF-powered brain computer interface application. The results show that thin (on the order of 100 micrometers thickness) biocompatible insulating layers can significantly impact antenna performance. Proper selection of the dielectric properties of the biocompatible insulating layers and of the implantation position inside human brain tissues can facilitate efficient RF power reception by the implanted antenna. While the results show that the effect of the human head shape on implanted antenna performance is somewhat negligible, the constitutive properties of the brain tissues surrounding the implanted antenna can significantly impact its electrical characteristics (input impedance and operational frequency). Three miniaturized antenna designs are simulated and demonstrate that a maximum RF power of up to 1.8 milliwatts can be received at 2 GHz when the antenna is implanted around the dura, without violating the Specific Absorption Rate (SAR) limits. PMID:25079941
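For reference, the SAR constraint mentioned above follows the standard tissue-dosimetry definition; the commonly cited regulatory limit is 1.6 W/kg averaged over 1 g of tissue (FCC/IEEE C95.1), though the paper's exact limit is not stated in the abstract.

```latex
% Specific Absorption Rate in tissue with conductivity \sigma,
% mass density \rho, and local electric-field magnitude |E|:
\mathrm{SAR} = \frac{\sigma \, |E|^{2}}{\rho}
```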
Virtually-augmented interfaces for tactical aircraft.
Haas, M W
1995-05-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.
Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach
NASA Astrophysics Data System (ADS)
Setthawong, Pisal; Vannija, Vajirasak
This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
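A minimal sketch of the minimum-variance fusion idea, assuming two independent yaw estimates with known noise variances; the interface's actual estimator and noise models are not given in the abstract.

```python
def fuse_orientation(theta_imu, var_imu, theta_cam, var_cam):
    """Minimum-variance combination of two independent estimates of the
    head yaw angle (inertial sensor and vision).  Weights are inverse
    variances; the variance values would come from each sensor's noise
    model (assumed known here)."""
    w_imu, w_cam = 1.0 / var_imu, 1.0 / var_cam
    theta = (w_imu * theta_imu + w_cam * theta_cam) / (w_imu + w_cam)
    var = 1.0 / (w_imu + w_cam)   # fused variance never exceeds either input
    return theta, var
```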
MARTI: man-machine animation real-time interface
NASA Astrophysics Data System (ADS)
Jones, Christian M.; Dlay, Satnam S.
1997-05-01
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.
An intelligent multi-media human-computer dialogue system
NASA Technical Reports Server (NTRS)
Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.
1988-01-01
Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.
Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.
ERIC Educational Resources Information Center
Acker, Stephen R.
1986-01-01
This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.
Enhancing Image Findability through a Dual-Perspective Navigation Framework
ERIC Educational Resources Information Center
Lin, Yi-Ling
2013-01-01
This dissertation focuses on investigating whether users will locate desired images more efficiently and effectively when they are provided with information descriptors from both experts and the general public. This study develops a way to support image finding through a human-computer interface by providing subject headings and social tags about…
Optimizations and Applications in Head-Mounted Video-Based Eye Tracking
ERIC Educational Resources Information Center
Li, Feng
2011-01-01
Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…
Effects of Airport Tower Controller Decision Support Tool on Controllers Head-Up Time
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Cruz Lopez, Jose M.
2013-01-01
Although aircraft positions and movements can easily be monitored on radar displays at major airports nowadays, it is still important for air traffic control tower (ATCT) controllers to look outside the window as much as possible to assure safe traffic operations. The present paper investigates whether an introduction of NASA's proposed Spot and Runway Departure Advisor (SARDA), a decision support tool for the ATCT controller, would increase or decrease the controllers' head-up time. SARDA provides the controller with departure-release schedule advisories, i.e., when to release each departure aircraft in order to minimize each aircraft's fuel consumption on taxiways and simultaneously maximize overall runway throughput. The SARDA advisories were presented on electronic flight strips (EFS). To investigate effects on head-up time, a human-in-the-loop simulation experiment with two retired ATCT controller participants was conducted in a high-fidelity ATCT cab simulator with a 360-degree computer-generated out-the-window view. Each controller participant wore a video camera on the side of the head, facing forward. The video data were later used to calculate the line of sight at each moment and identify head-up times. Four sessions were run with the SARDA advisories, and four sessions were run without (baseline). Traffic-load levels were varied in each session. The same user interface (EFS and the radar displays) was used in both the advisory and baseline sessions to make them directly comparable. The paper reports the findings and discusses their implications.
NASA Astrophysics Data System (ADS)
Setscheny, Stephan
The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuing miniaturization and the increasing performance of microcontrollers and sensors for the detection of human interactions, developers gain new possibilities for realising innovative interfaces. With this movement, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution can be seen especially in the areas of ubiquitous computing and interaction through tangible user interfaces. Beyond this, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions and of sensors enables the realisation of these experienceable interfaces. Besides the theory of tangible user interfaces, the consideration of sensors and the Arduino platform forms a main aspect of this work.
A model for the control mode man-computer interface dialogue
NASA Technical Reports Server (NTRS)
Chafin, R. L.
1981-01-01
A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.
Hands-Free, Heads-Up Control System for Unmanned Ground Vehicles
2011-08-10
interface evaluation: Industry evaluated two commercial-off-the-shelf (COTS) brain-computer interfaces from two companies, Neurosky and Emotiv ... useless, resulting in very low command recognition accuracy. In addition, latency issues plagued the system. The Emotiv system, unlike the Neurosky, required great effort to use and calibrate. It requires 16 foam tips to be wet with saline solution and then ...
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11
The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
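A toy version of the force-to-scroll-rate mapping the patent describes; the patent specifies only that the rate is related to the magnitude of applied force, so the threshold, gain, and saturation constants below are illustrative assumptions.

```python
def scroll_rate(force, threshold=0.5, gain=120.0, max_rate=600.0):
    """Map force applied against the haptic boundary to a scroll rate.

    force:     signed force in newtons (assumed scale); sign gives direction
    threshold: force the user must exceed before scrolling begins
    gain:      pixels per second of scrolling per newton of excess force
    """
    excess = abs(force) - threshold       # force inside the boundary does nothing
    if excess <= 0.0:
        return 0.0
    rate = min(gain * excess, max_rate)   # rate grows with force, then saturates
    return rate if force > 0 else -rate
```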
Eye Tracking and Head Movement Detection: A State-of-Art Survey
2013-01-01
Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated. PMID:27170851
Human machine interface to manually drive rhombic like vehicles such as transport casks in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo
2015-07-01
The Cask and Plug Remote Handling System (CPRHS) and the respective Cask Transfer System (CTS) are designed to transport activated components between the reactor and the hot cell buildings of ITER during maintenance operations. In nominal operation, the CPRHS/CTS shall operate autonomously under human supervision. However, in some unexpected situations, the automatic mode must be overridden and the vehicle must be remotely guided by a human operator due to the harsh conditions of the environment. The CPRHS/CTS is a rhombic-like vehicle with two independent steerable and drivable wheels along its longitudinal axis, giving it omni-directional capabilities. During manual guidance, the human operator has to deal with four degrees of freedom, namely the orientations and speeds of the two wheels. This work proposes a Human Machine Interface (HMI) to manage the degrees of freedom and to remotely guide the CPRHS/CTS in ITER, taking full advantage of its rhombic-like capabilities. Previous work addressed driving each wheel independently, i.e., controlling the orientation and speed of each wheel separately. The results showed that such a solution is inefficient: the attention of the human operator becomes focused on a single wheel, and the commands cannot be guaranteed to respect the physical constraints of the vehicle, resulting in slippage or even in clashes. This work proposes a solution that consists in controlling the position of the vehicle's center of mass and its heading in the world frame. The solution is implemented using a rotational disk to control the vehicle heading and a common analogue joystick to control the velocity vector of the center of mass of the vehicle. The number of degrees of freedom reduces to three, i.e., two angles (the vehicle heading and the orientation of the velocity vector) and a scalar (the magnitude of the velocity vector). This is possible using a kinematic model based on the vehicle's Instantaneous Center of Rotation (ICR): a geometric approach where, at each time instant, the vehicle describes a circumference (either with a finite or infinite radius). The inverse of the kinematic model transforms the three input parameters of the center of mass into the four parameters for the wheels, preserving the omni-directional capabilities. The solution is implemented and tested using an HMI with a control disk and a two-axis analogue joystick. The control disk was specially designed for this solution and implemented using a programmable micro-controller. In a first set of experiments, the HMI communicates with a computer running a simulator of the CPRHS/CTS, including the vehicle kinematics and dynamics, moving in a map of the ITER buildings. In a second set of experiments, the HMI communicates with a scaled prototype of the CPRHS running in a mock-up scenario to obtain more realistic results. Several types of tests were performed to evaluate the usability of the HMI. Human operators with no prior knowledge of or experience with the interface were invited to test it. The operators had to drive the vehicle from an initial place to a final destination under the following conditions: with a pre-computed path to help guidance, without any path, with information on the closest obstacles, and without any help.
The performance was evaluated using the duration of the operation, the energy required to perform the described path, the risk of collision and, in the case of a pre-computed path, the comparison between paths. In addition, each operator tested the HMI several times to evaluate performance over consecutive trials.
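A sketch of the inverse kinematic mapping described above, from the three center-of-mass inputs to the four wheel parameters. Wheel placement, units, and the half_length value are illustrative assumptions rather than the CPRHS/CTS specification. Because both wheel velocities are derived from a single rigid-body twist, they automatically share one Instantaneous Center of Rotation.

```python
import numpy as np

def wheel_commands(speed, speed_angle, yaw_rate, half_length=1.0):
    """Inverse kinematics for a rhombic vehicle with two steerable,
    drivable wheels at (+half_length, 0) and (-half_length, 0) along
    the longitudinal (body-frame x) axis.

    Inputs mirror the HMI's three degrees of freedom: speed magnitude,
    speed-vector angle in the body frame, and heading (yaw) rate.
    Returns [(wheel_speed, steer_angle), ...] for front and rear wheels.
    """
    v_com = np.array([speed * np.cos(speed_angle), speed * np.sin(speed_angle)])
    commands = []
    for x in (+half_length, -half_length):
        # Rigid-body velocity at the wheel: v = v_com + omega x r,
        # with r = (x, 0) and omega about the vertical axis -> (0, yaw_rate*x).
        v_wheel = v_com + np.array([0.0, yaw_rate * x])
        commands.append((float(np.hypot(*v_wheel)),
                         float(np.arctan2(v_wheel[1], v_wheel[0]))))
    return commands
```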
A neural-based remote eye gaze tracker under natural head motion.
Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso
2008-10-01
A novel approach to view-based eye gaze tracking for human-computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and to strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
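One plausible form of the neural gaze mapping evaluated in such work, sketched with scikit-learn; the feature layout, network size, and training setup are assumptions, not the paper's architecture.

```python
from sklearn.neural_network import MLPClassifier

def train_gaze_classifier(features, zone_labels):
    """Learn the non-linear mapping from image features to screen zones.

    features:    (n_samples, n_features) array, e.g. pupil position,
                 eye-corner anchors, and coarse head-pose cues
                 (an assumed feature set for illustration)
    zone_labels: integers 0..14 for the 15-zone graphical interface
    """
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    net.fit(features, zone_labels)
    return net   # net.predict(new_features) yields the gazed zone
```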
Rojas, Mario; Ponce, Pedro; Molina, Arturo
2016-08-01
This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head-movement and speech-recognition controls, was designed to facilitate mobility for severely disabled people. Thirteen tasks, common to all wheelchair users, were attempted five times each, controlling the wheelchair with the virtual joystick and the hands-free interfaces, in different areas for disabled and non-disabled people. Even though the prototype has an intelligent navigation control based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scores showed that the head-movement control and the virtual joystick have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates that navigation assistance is needed to accomplish some of the goals. Furthermore, the evaluation time indicates which skills require more user training with the interface, and points to specifications for improving the overall performance of the wheelchair.
NASA Astrophysics Data System (ADS)
Abbott, W. W.; Faisal, A. A.
2012-08-01
Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human-machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with performance comparable to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits/s, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
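A common way to lift binocular 2D gaze to a 3D fixation point is to intersect the two gaze rays; since they rarely meet exactly, the midpoint of their shortest connecting segment is used. This textbook construction is sketched below; the tracker's actual estimator is not given in the abstract.

```python
import numpy as np

def gaze_point_3d(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between the two gaze rays.

    o_l, o_r: 3-D origins of the left/right gaze rays (eye centers)
    d_l, d_r: unit direction vectors of the corresponding rays
    """
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b            # approaches 0 for parallel (distant) gaze
    if abs(denom) < 1e-9:
        return None
    s = (b * e - c * d) / denom      # parameter along the left ray
    t = (a * e - b * d) / denom      # parameter along the right ray
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
```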
Ocular attention-sensing interface system
NASA Technical Reports Server (NTRS)
Zaklad, Allen; Glenn, Floyd A., III; Iavecchia, Helene P.; Stokes, James M.
1986-01-01
The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating a manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.
van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.
2013-01-01
Modular organization in control architecture may underlie the versatility of human motor control; but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel? In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such design could aid robots to reproduce the flexibility of human control. PMID:23675342
Toward a practical mobile robotic aid system for people with severe physical disabilities.
Regalbuto, M A; Krouskop, T A; Cheatham, J B
1992-01-01
A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.
Wang, Fang; Han, Yong; Wang, Bingyu; Peng, Qian; Huang, Xiaoqun; Miller, Karol; Wittek, Adam
2018-05-12
In this study, we investigate the effects of modelling choices for the brain-skull interface (the layers of tissue between the brain and skull that determine boundary conditions for the brain) and of the constitutive model of brain parenchyma on the brain responses under violent impact, as predicted using a computational biomechanics model. We used the head/brain model from the Total HUman Model for Safety (THUMS), an extensively validated finite element model of the human body that has been applied in numerous injury biomechanics studies. The computations were conducted using the well-established nonlinear explicit dynamics finite element code LS-DYNA. We employed four approaches for modelling the brain-skull interface and four constitutive models for the brain tissue in numerical simulations of experiments, reported in the literature, on post-mortem human subjects exposed to violent impacts. The brain-skull interface models included direct representation of the brain meninges and cerebrospinal fluid, the outer brain surface rigidly attached to the skull, frictionless sliding contact between the brain and skull, and a layer of spring-type cohesive elements between the brain and skull. We considered Ogden hyperviscoelastic, Mooney-Rivlin hyperviscoelastic, neo-Hookean hyperviscoelastic and linear viscoelastic constitutive models of the brain tissue. Our study indicates that the predicted deformations within the brain and the related brain injury criteria are strongly affected both by the approach to modelling the brain-skull interface and by the constitutive model of the brain parenchyma. The results suggest that accurate prediction of deformations within the brain and of the risk of brain injury due to violent impact using computational biomechanics models may require representation of the meninges and the subarachnoid space with cerebrospinal fluid, and application of a hyperviscoelastic (preferably Ogden-type) constitutive model for the brain tissue.
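For reference, the elastic core of the Ogden hyperviscoelastic model named above is the standard Ogden strain-energy density in terms of the principal stretches; the parameter values used in THUMS are not reproduced here.

```latex
% Ogden strain-energy density with material parameters \mu_p, \alpha_p
% and principal stretches \lambda_1, \lambda_2, \lambda_3:
W = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p}
         + \lambda_3^{\alpha_p} - 3 \right)
```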
A human factors approach to range scheduling for satellite control
NASA Technical Reports Server (NTRS)
Wright, Cameron H. G.; Aitken, Donald J.
1991-01-01
Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
NASA Astrophysics Data System (ADS)
Kajiwara, Yusuke; Murata, Hiroaki; Kimura, Haruhiko; Abe, Koji
As a communication support tool for cases of amyotrophic lateral sclerosis (ALS), research on eye-gaze human-computer interfaces has been active. However, since voluntary and involuntary eye movements cannot be distinguished in such interfaces, their performance is still not sufficient for practical use. This paper presents a high-performance human-computer interface system which unites high-quality recognition of horizontal directional eye movements and voluntary blinks. The experimental results show that the number of incorrect inputs is decreased by 35.1% relative to an existing system, which recognizes horizontal and vertical directional eye movements in addition to voluntary blinks, and that character input is sped up by 17.4% compared to the existing system.
Human Factors Considerations in System Design
NASA Technical Reports Server (NTRS)
Mitchell, C. M. (Editor); Vanbalen, P. M. (Editor); Moe, K. L. (Editor)
1983-01-01
Human factors considerations in system design were examined. Human factors in automated command and control, in the efficiency of the human-computer interface, and in system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; information display and interaction in real-time environments.
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-01-01
Objective: At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI control of a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real-world device has on subjects' control, with comparison to a two-dimensional virtual cursor task. Approach: Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance: Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in three-dimensional physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
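A minimal sketch of the sensorimotor-rhythm feature underlying such control: the band power of the mu rhythm in a window of EEG, which (after user-specific calibration) is mapped to a velocity command. The sampling rate, band edges, and Welch settings below are typical values, not the study's exact configuration.

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(eeg_window, fs=256.0, band=(8.0, 12.0)):
    """Power of the sensorimotor (mu) rhythm in one EEG window.

    eeg_window: 1-D array of samples from one electrode
    fs:         sampling rate in Hz (assumed value)
    band:       mu-band edges in Hz (assumed values)

    In a motor-imagery BCI, left/right electrode band powers are
    contrasted and scaled into a control signal such as lateral velocity.
    """
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
```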
Naumann, Benjamin; Warth, Peter; Olsson, Lennart; Konstantinidis, Peter
2017-11-01
The vertebrate head/trunk interface is the region of the body where the different developmental programs of the head and trunk come in contact. Many anatomical structures that develop in this transition zone differ from similar structures in the head or the trunk. This is best exemplified by the cucullaris/trapezius muscle, spanning the head/trunk interface by connecting the head to the pectoral girdle. The source of this muscle has been claimed to be either the unsegmented head mesoderm or the somites of the trunk. However most recent data on the development of the cucullaris muscle are derived from tetrapods and information from actinopterygian taxa is scarce. We used classical histology in combination with fluorescent whole-mount antibody staining and micro-computed tomography to investigate the developmental pattern of the cucullaris and the branchial muscles in a basal actinopterygian, the Longnose gar (Lepisosteus osseus). Our results show (1) that the cucullaris has been misidentified in earlier studies on its development in Lepisosteus. (2) Cucullaris development is delayed compared to other head and trunk muscles. (3) This developmental pattern of the cucullaris is similar to that reported from some tetrapod taxa. (4) That the retractor dorsalis muscle of L. osseus shows a delayed developmental pattern similar to the cucullaris. Our data are in agreement with an explanatory scenario for the cucullaris development in tetrapods, suggesting that these mechanisms are conserved throughout the Osteichthyes. Furthermore the developmental pattern of the retractor dorsalis, also spanning the head/trunk interface, seems to be controlled by similar mechanisms. © 2017 Wiley Periodicals, Inc.
Anderson, Thomas G.
2004-12-21
The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
Human-computer interface incorporating personal and application domains
Anderson, Thomas G [Albuquerque, NM
2011-03-29
The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
Development and application of virtual reality for man/systems integration
NASA Technical Reports Server (NTRS)
Brown, Marcus
1991-01-01
While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still presents an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the hands are the primary way a human physically interacts with their environment, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world. The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, left temporalis, and right temporalis muscles of the head. The signal processing algorithm translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left and right jaw clenching) into five corresponding types of cursor movements (left, right, up, down, and left-click), providing basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than previous approaches.
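The abstract does not publish the full multi-step algorithm, but its first principle (per-channel energy dominance) lends itself to a compact illustration. The following Python sketch is a minimal, hypothetical rendering of that step only: the channel ordering, the activation threshold k, and the action mapping are assumptions, and the spectral test that separates the two eyebrow movements is merely stubbed out in a comment.

```python
import numpy as np

# Hypothetical channel order and command mapping (not taken from the paper).
CHANNELS = ["right_frontalis", "left_temporalis", "right_temporalis"]
ACTIONS = {
    ("left_temporalis",): "left",
    ("right_temporalis",): "right",
    ("right_frontalis",): "up_or_down",  # a spectral test would split up vs. down
    ("left_temporalis", "right_temporalis"): "left_click",
}

def window_energy(x):
    """Mean squared amplitude of one EMG analysis window."""
    return float(np.mean(np.square(x)))

def classify(window, rest_energy, k=3.0):
    """window: (n_samples, 3) EMG; rest_energy: per-channel resting baseline.
    A channel counts as active when its energy exceeds k times its baseline."""
    energies = [window_energy(window[:, i]) for i in range(len(CHANNELS))]
    active = tuple(CHANNELS[i] for i, e in enumerate(energies)
                   if e > k * rest_energy[i])
    return ACTIONS.get(active, "no_command")
```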
NASA Technical Reports Server (NTRS)
Mitchell, C. M.
1982-01-01
The NASA-Goddard Space Flight Center is responsible for the control and ground support for all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective to reduce costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC represents currently a labor intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer, and the interface for the human-computer dialogue.
Information Presentation and Control in a Modern Air Traffic Control Tower Simulator
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton
1996-01-01
The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.
Concept of software interface for BCI systems
NASA Astrophysics Data System (ADS)
Svejda, Jaromir; Zak, Roman; Jasek, Roman
2016-06-01
Brain Computer Interface (BCI) technology is intended to control an external system by brain activity. One of the main parts of such a system is the software interface, which is responsible for clear communication between the brain and either the computer or additional devices connected to the computer. This paper is organized as follows. Firstly, current knowledge about the human brain is briefly summarized to point out its complexity. Secondly, a concept of a BCI system is described, which is then used to build an architecture for the proposed software interface. Finally, disadvantages of the sensing technology discovered during the sensing part of our research are discussed.
Virtual interface environment workstations
NASA Technical Reports Server (NTRS)
Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.
1988-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
Human factors optimization of virtual environment attributes for a space telerobotic control station
NASA Astrophysics Data System (ADS)
Lane, Jason Corde
2000-10-01
Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display, which is a graphical simulation that shows the commands sent to the actual system in real-time. The primary goal of this research was to show a commanded display as an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments were used to investigate how subjects compensated for time delay under a variety of conditions while controlling a 7-degree of freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship occurred even at different manipulator speeds, varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time delay effects, up to 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts to the task worksite, even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display was still a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted. A correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head mounted display and LCD shutter glasses, were compared. The results of the above testing were used to develop an effective virtual environment control station to control a multi-arm robot.
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
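Since the residual-error hypersurfaces exhibit local minima, the article's recommendation of a global search can be pictured as an exhaustive scan: for every candidate position, the best-fitting dipole moment is a linear least-squares problem, and the winning position minimizes the residual. The sketch below assumes a placeholder `lead_field` forward model; a real one would be a concentric-sphere or realistic (BEM/FEM) computation.

```python
import numpy as np

def lead_field(pos):
    """Placeholder forward model: an (n_electrodes, 3) matrix mapping a unit
    dipole moment at `pos` to scalp potentials. A real implementation would
    use a concentric-sphere or realistic (BEM/FEM) head model."""
    raise NotImplementedError

def fit_dipole(v_measured, candidate_positions):
    """Global scan over candidate positions; returns (residual, position, moment)."""
    best = (np.inf, None, None)
    for pos in candidate_positions:
        L = lead_field(pos)                              # (n_electrodes, 3)
        m, *_ = np.linalg.lstsq(L, v_measured, rcond=None)
        r = float(np.linalg.norm(v_measured - L @ m))    # residual error
        if r < best[0]:
            best = (r, pos, m)
    return best
```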
Evaluation of head orientation and neck muscle EMG signals as three-dimensional command sources.
Williams, Matthew R; Kirsch, Robert F
2015-03-05
High cervical spinal cord injuries result in significant functional impairments and affect both the injured individual and their family and caregivers. To help restore function to these individuals, multiple user interfaces are available to enable command and control of external devices. However, little work has been performed to assess the 3D performance of these interfaces. We investigated the performance of eight human subjects in using three user interfaces (head orientation, EMG from muscles of the head and neck, and a three-axis joystick) to command the endpoint position of a multi-axis robotic arm within a 3D workspace to perform a novel out-to-center 3D Fitts' Law style task. Two of these interfaces (head orientation and EMG from muscles of the head and neck) could realistically be used by individuals with high tetraplegia, while the joystick was evaluated as a standard of high performance. Performance metrics were developed to assess the aspects of command source performance. Data were analyzed using a mixed model design ANOVA. Fixed effects were investigated between sources as well as for interactions between index of difficulty, command source, and the five performance measures used. A 5% threshold for statistical significance was used in the analysis. The performances of the three command interfaces were rather similar, though significant differences between command sources were observed. The apparent similarity is due in large part to the sequential command strategy (i.e., one dimension of movement at a time) typically adopted by the subjects. EMG-based commands were particularly pulsatile in nature. The use of sequential commands had a significant impact on each command source's performance for movements in two or three dimensions. While the sequential nature of the commands produced by the user did not fit with Fitts' Law, the other performance measures used were able to illustrate the properties of each command source. Though the EMG commands were pulsatile, their overall similarity to head orientation, together with the fact that EMG recording could readily be included in a future implanted neuroprosthesis, makes EMG an attractive choice as a command source for controlling an arm in 3D space.
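For readers unfamiliar with Fitts' Law style evaluations, the index of difficulty and throughput are the quantities such tasks are built around. A minimal sketch using the common Shannon formulation follows; the paper's own metrics were custom-developed, so this is background rather than the study's method.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of the Fitts' law index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single movement; ISO-style evaluations
    average this over directions and difficulty levels."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a 12 cm reach to a 3 cm target completed in 1.8 s
print(throughput(0.12, 0.03, 1.8))  # log2(5)/1.8 ≈ 1.29 bits/s
```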
The design of an intelligent human-computer interface for the test, control and monitor system
NASA Technical Reports Server (NTRS)
Shoaff, William D.
1988-01-01
The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.
Automation in the graphic arts
NASA Astrophysics Data System (ADS)
Truszkowski, Walt
1995-04-01
The CHIMES (Computer-Human Interaction Models) tool was designed to help solve a simply stated but important problem, i.e., the problem of generating a user interface to a system that complies with established human factors standards and guidelines. Though designed for use in a fairly restricted user domain, i.e., spacecraft mission operations, the CHIMES system is essentially domain independent and applicable wherever graphical user interfaces or displays are encountered. The CHIMES philosophy and operating strategy are quite simple. Instead of requiring a human designer to actively maintain in his or her head the now encyclopedic knowledge that human factors and user interface specialists have evolved, CHIMES incorporates this information in its knowledge bases. When directed to evaluate a design, CHIMES determines and accesses the appropriate knowledge, performs an evaluation of the design against that information, determines whether the design is compliant with the selected guidelines, and suggests corrective actions if deviations from the guidelines are discovered. This paper will provide an overview of the capabilities of the current CHIMES tool and discuss the potential integration of CHIMES-like technology in automated graphic arts systems.
Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.
Lopes, Ana C; Nunes, Urbano; Vaz, Luís
2010-01-01
This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain Computer Interfaces (BCI), characterized by providing a small set of commands issued sparsely, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability in steering a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to combine the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
NASA Technical Reports Server (NTRS)
Fisher, Scott S.
1986-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
Designing the user interface: strategies for effective human-computer interaction
NASA Astrophysics Data System (ADS)
Shneiderman, B.
1998-03-01
In revising this popular book, Ben Shneiderman again provides a complete, current, and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.
Microgravity human factors workstation development
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Wilmington, Robert P.; Morris, Randy B.; Jensen, Dean G.
1992-01-01
Microgravity evaluations of workstation hardware as well as its system components were found to be very useful for determining the expected needs of the Space Station crew and for refining overall workstation design. Research at the Johnson Space Center has been carried out to provide optimal workstation design and human interface. The research included evaluations of hand controller configurations for robots and free flyers, the identification of cursor control device requirements, and the examination of anthropometric issues of workstation design such as reach, viewing distance, and head clearance.
Pilot control through the TAFCOS automatic flight control system
NASA Technical Reports Server (NTRS)
Wehrend, W. R., Jr.
1979-01-01
The set of flight control logic used in a recently completed flight test program to evaluate the total automatic flight control system (TAFCOS), with the controller operating in a fully automatic mode, was used to perform an unmanned simulation on an IBM 360 computer in which the TAFCOS concept was extended to provide a multilevel pilot interface. A pilot TAFCOS interface for direct pilot control by use of a velocity-control-wheel-steering mode was defined, as well as a means for calling up conventional autopilot modes. It is concluded that the TAFCOS structure is easily adaptable to the addition of pilot control through a stick-wheel-throttle control similar to conventional airplane controls. Conventional autopilot modes, such as airspeed-hold, altitude-hold, heading-hold, and flight-path-angle-hold, can also be included.
SIG -- The Role of Human-Computer Interaction in Next-Generation Control Rooms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Jacques Hugo; Christian Richard
2005-04-01
The purpose of this CHI Special Interest Group (SIG) is to facilitate the convergence between human-computer interaction (HCI) and control room design. HCI researchers and practitioners actively need to infuse state-of-the-art interface technology into control rooms to meet usability, safety, and regulatory requirements. This SIG outlines potential HCI contributions to instrumentation and control (I&C) and automation in control rooms as well as to general control room design.
Human Factors in the Design of a Computer-Assisted Instruction System. Technical Progress Report.
ERIC Educational Resources Information Center
Mudge, J. C.
A research project built an author-controlled computer-assisted instruction (CAI) system to study ease-of-use factors in student-system, author-system, and programer-system interfaces. Interfaces were designed and observed in use and systematically revised. Development of course material by authors, use by students, and administrative tasks were…
Adapting human-machine interfaces to user performance.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2008-01-01
The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.
Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.
ERIC Educational Resources Information Center
Deaudelin, Colette; Dussault, Marc; Brodeur, Monique
2003-01-01
Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…
The Human-Computer Interface and Information Literacy: Some Basics and Beyond.
ERIC Educational Resources Information Center
Church, Gary M.
1999-01-01
Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
…synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where the human control of a robot agent can be experimentally validated. [Report documentation page residue omitted; final report covering 17-Sep-2013 to 16-Sep-2014.]
NASA Astrophysics Data System (ADS)
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-08-01
Objective. At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
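Sensorimotor-rhythm BCIs of this kind conventionally map band power over the motor cortices to a velocity command. The sketch below shows that generic mapping; the band edges, electrode choice, and control law are common conventions from the SMR literature, not details taken from this paper, whose drone mapping may differ.

```python
import numpy as np
from scipy.signal import welch

MU_BAND = (8.0, 13.0)  # typical sensorimotor (mu) rhythm range, in Hz

def band_power(x, fs, band=MU_BAND):
    """Integrated Welch power of one channel within the given band."""
    f, p = welch(x, fs=fs, nperseg=int(fs))  # 1-second segments
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.trapz(p[mask], f[mask]))

def lateral_velocity(c3, c4, fs, gain=1.0):
    """Illustrative control law: the left-right difference in mu power over
    electrodes C3 and C4 drives horizontal velocity, the usual scheme in
    SMR cursor control."""
    return gain * (band_power(c4, fs) - band_power(c3, fs))
```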
New generation of 3D desktop computer interfaces
NASA Astrophysics Data System (ADS)
Skerjanc, Robert; Pastoor, Siegmund
1997-05-01
Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).
Mishra, Varsha; Puthucheri, Smitha; Singh, Dharmendra
2018-05-07
As a preventive measure against electromagnetic (EM) wave exposure to the human body, EM radiation regulatory authorities such as ICNIRP and FCC have defined limits on the specific absorption rate (SAR) for the human head during EM wave exposure from mobile phones. SAR quantifies the absorption of EM waves in the human body and mainly depends on the dielectric properties (ε', σ) of the corresponding tissues. The head is more susceptible to EM wave exposure than other parts of the body due to the usage of mobile phones. The human head is a complex structure made up of multiple tissues with intermixing of many layers; thus, the accurate measurement of the permittivity (ε') and conductivity (σ) of the tissues of the human head is still a challenge. For computing the SAR, researchers use multilayer models, which pose challenges in defining the boundaries between layers. Therefore, in this paper, an attempt has been made to propose a method to compute the effective complex permittivity of the human head in the range of 0.3 to 3.0 GHz by applying the De-Loor mixing model. Similarly, for characterizing the thermal effect in the tissue, thermal properties of the human head have also been computed using the De-Loor mixing method. The effective dielectric and thermal properties of the equivalent human head model are compared with IEEE Std. 1528.
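For reference, the local SAR that such dielectric data feed into is given by a standard relation (textbook material used alongside IEEE Std 1528, not a result of this paper):

```latex
% sigma: tissue conductivity (S/m); E: RMS electric field (V/m);
% rho: tissue mass density (kg/m^3)
\[
  \mathrm{SAR} \;=\; \frac{\sigma \, |E|^{2}}{\rho}
\]
```

Higher effective conductivity or lower density raises the absorbed power per kilogram of tissue, which is why the effective (ε', σ) of the head model directly drives the computed SAR.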
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic are discussed. The graphical presentation of spatial relationships, the problems it poses for air traffic control, and solutions to those problems are addressed. Psychological factors involved in the man-machine interface are stressed.
Iáñez, Eduardo; Azorin, Jose M.; Perez-Vidal, Carlos
2013-01-01
This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the difference of potential between the cornea and the retina. A new pair of EOG glasses has been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eyes. The interface, which includes the EOG electrodes, uses a new processing algorithm that is able to detect the gaze direction and the blink of the eyes from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. PMID:23843986
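The abstract does not give the detection algorithm itself, but EOG gaze and blink detectors are often built from simple amplitude rules on the horizontal and vertical leads. The toy detector below is purely illustrative: the thresholds, the window-based logic, and the polarity conventions are all assumptions rather than the paper's method.

```python
import numpy as np

def detect_eog_event(h, v, amp_thresh=100.0, blink_thresh=250.0):
    """Classify one analysis window of horizontal (h) and vertical (v) EOG,
    both in microvolts. Blinks are assumed to dominate the vertical lead;
    saccade direction is read from the sign of the deflection. Thresholds
    are illustrative placeholders, not values from the paper."""
    if np.ptp(v) > blink_thresh:
        return "blink"
    if np.ptp(h) > amp_thresh:
        return "left" if h[-1] < h[0] else "right"
    if np.ptp(v) > amp_thresh:
        return "down" if v[-1] < v[0] else "up"
    return "none"
```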
Using the Electrocorticographic Speech Network to Control a Brain-Computer Interface in Humans
Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin
2013-01-01
Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulation can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within higher gamma frequency oscillations and enabled users to achieve final target accuracies between 68 and 91% within 15 minutes. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1 mm spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive. PMID:21471638
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant for space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer, through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
Farhoudi, Hamidreza; Oskouei, Reza H; Pasha Zanoosi, Ali A; Jones, Claire F; Taylor, Mark
2016-12-05
This study predicts the frictional moments at the head-cup interface and frictional torques and bending moments acting on the head-neck interface of a modular total hip replacement across a range of activities of daily living. The predicted moment and torque profiles are based on the kinematics of four patients and the implant characteristics of a metal-on-metal implant. Depending on the body weight and type of activity, the moments and torques had significant variations in both magnitude and direction over the activity cycles. For the nine investigated activities, the maximum magnitude of the frictional moment ranged from 2.6 to 7.1 Nm. The maximum magnitude of the torque acting on the head-neck interface ranged from 2.3 to 5.7 Nm. The bending moment acting on the head-neck interface varied from 7 to 21.6 Nm. One-leg-standing had the widest range of frictional torque on the head-neck interface (11 Nm) while normal walking had the smallest range (6.1 Nm). The widest range, together with the maximum magnitude of torque, bending moment, and frictional moment, occurred during one-leg-standing of the lightest patient. Most of the simulated activities resulted in frictional torques that were near the previously reported oxide layer depassivation threshold torque. The predicted bending moments were also found at a level believed to contribute to the oxide layer depassivation. The calculated magnitudes and directions of the moments, applied directly to the head-neck taper junction, provide realistic mechanical loading data for in vitro and computational studies on the mechanical behaviour and multi-axial fretting at the head-neck interface.
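As context for the magnitudes quoted above, the frictional moment at a ball-and-socket articulation is often summarized by a first-order friction-factor model (a textbook simplification, not the patient-kinematics computation this study performs):

```latex
% f: friction factor of the bearing couple; F: resultant joint contact
% force (N); r: femoral head radius (m)
\[
  M_f \;=\; f \, F \, r
\]
```

Under such a model the frictional moment scales linearly with joint load and head radius, which is why body weight and activity type figure so prominently in the predictions.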
User participation in the development of the human/computer interface for control centers
NASA Technical Reports Server (NTRS)
Broome, Richard; Quick-Campbell, Marlene; Creegan, James; Dutilly, Robert
1996-01-01
Technological advances coupled with the requirements to reduce operations staffing costs led to the demand for efficient, technologically sophisticated mission operations control centers. The control center under development for the Earth Observing System (EOS) is considered. The users are involved in the development of a control center in order to ensure that it is cost-efficient and flexible. A number of measures were implemented in the EOS program in order to encourage user involvement in the area of human-computer interface development. The following user participation exercises carried out in relation to the system analysis and design are described: the shadow participation of the programmers during a day of operations; the flight operations personnel interviews; and the analysis of the flight operations team tasks. User participation in the interface prototype development, the prototype evaluation, and the system implementation is reported on. The involvement of the users early in the development process enables the requirements to be better understood and the cost to be reduced.
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures, in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head-tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The user can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus the dozen or so used in standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
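The calibration step described, fitting a gaze-to-position map from the thousands of samples collected along the space-filling path, can be pictured as a regression problem. The sketch below uses a plain affine least-squares map as a stand-in; the actual GT3D calibration model is not specified in the abstract, so the linear form and feature layout are assumptions.

```python
import numpy as np

def calibrate(features, targets):
    """Fit an affine map from eye-tracker feature vectors (n, d) to the known
    3D robot positions (n, 3) recorded while the user tracked the Peano-curve
    sweep. With thousands of samples the least-squares fit is well conditioned."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # affine column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W  # (d + 1, 3)

def gaze_to_3d(W, feature_vec):
    """Map one eye-feature vector to a 3D end-point estimate."""
    return np.append(feature_vec, 1.0) @ W
```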
1993-03-25
…application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic is applied to the design of a Form-based interface for database data…
NASA Astrophysics Data System (ADS)
Simeral, J. D.; Kim, S.-P.; Black, M. J.; Donoghue, J. P.; Hochberg, L. R.
2011-04-01
The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor.
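The decoding pipeline described, a population-based Kalman velocity filter with a linear discriminant click classifier layered on top, has a standard structure. The sketch below shows the velocity-filter core only; matrices A, W, H, and Q would be fit from the training blocks, and their shapes here are the only assumption.

```python
import numpy as np

class VelocityKalmanDecoder:
    """Kalman filter with cursor velocity as the state and binned spike
    counts as the observation, the usual form of a population velocity
    decoder. A linear discriminant click classifier would run alongside."""

    def __init__(self, A, W, H, Q):
        self.A, self.W, self.H, self.Q = A, W, H, Q   # fit from training data
        self.x = np.zeros(A.shape[0])                 # velocity estimate
        self.P = np.eye(A.shape[0])                   # state covariance

    def step(self, z):
        # Predict forward one bin.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Correct with the innovation from the observed spike counts z.
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x  # decoded velocity; integrate externally for position
```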
A Surveillance and Targeting System for an Unmanned Ground Vehicle
1990-08-01
[Briefing-slide residue; recoverable points: an acoustic pickup with selectable infrasonic and ultrasonic frequency-shifting capability; a super-binaural configuration with an angle and pickup separation greater than the human head; variable gain with clipping; integratable into the TOV operator helmet; a control interface with volume up/down, sonic/ultrasonic/infrasonic on/off, and boost hi/med/off settings; and laser safety implications for design, including the power-up sequence and abort.]
EEGLAB, SIFT, NFT, BCILAB, and ERICA: new tools for advanced EEG processing.
Delorme, Arnaud; Mullen, Tim; Kothe, Christian; Akalin Acar, Zeynep; Bigdely-Shamlo, Nima; Vankov, Andrey; Makeig, Scott
2011-01-01
We describe a set of complementary EEG data collection and processing tools recently developed at the Swartz Center for Computational Neuroscience (SCCN) that connect to and extend the EEGLAB software environment, a freely available and readily extensible processing environment running under Matlab. The new tools include (1) a new and flexible EEGLAB STUDY design facility for framing and performing statistical analyses on data from multiple subjects; (2) a neuroelectromagnetic forward head modeling toolbox (NFT) for building realistic electrical head models from available data; (3) a source information flow toolbox (SIFT) for modeling ongoing or event-related effective connectivity between cortical areas; (4) a BCILAB toolbox for building online brain-computer interface (BCI) models from available data, and (5) an experimental real-time interactive control and analysis (ERICA) environment for real-time production and coordination of interactive, multimodal experiments.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1990-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
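The head-related transfer function filtering described above reduces, in the time domain, to convolving a monaural cue with a measured left/right impulse-response pair. A minimal sketch, assuming HRIR arrays from some measured set are already in hand:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono sound at the direction encoded by one HRIR pair by
    convolving it with the left- and right-ear head-related impulse
    responses; the result is a two-channel signal for headphones."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)  # shape (n_samples, 2)
```

Real-time displays of this kind typically swap or interpolate HRIR pairs as the head or source moves, which this single-shot sketch omits.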
Zhao, Li; Xing, Xiao; Guo, Xuhong; Liu, Zehua; He, Yang
2014-10-01
A brain-computer interface (BCI) system achieves communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working theory of a wireless smart home system based on BCI technology. We first elicited the steady-state visual evoked potential (SSVEP) using a single-chip microcomputer and LED-based visual stimulation of the eyes. Then, using a power spectral transformation built on the LabVIEW platform, we processed the EEG signals elicited under different stimulation frequencies in real time and translated them into different instructions. Those instructions were received by the wireless transceiver equipment to control the household appliances and to achieve intelligent control of the specified devices. The experimental results showed that the correct rate for the 10 subjects reached 100%, and the average control time for a single device was 4 seconds; thus, this design achieves the original purpose of a smart home system.
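An SSVEP command of this kind is conventionally read out by comparing spectral power at the candidate stimulation frequencies. The sketch below shows that readout; the frequencies, device names, and single-channel FFT approach are invented for illustration, since the paper's LabVIEW implementation details are not given in the abstract.

```python
import numpy as np

# Hypothetical LED flicker frequencies and the appliances they select.
STIM_FREQS = {7.0: "lamp", 9.0: "fan", 11.0: "television"}

def classify_ssvep(eeg, fs, freqs=STIM_FREQS):
    """Return the appliance whose stimulation frequency carries the most
    spectral power in one EEG window (single channel, Hann-windowed FFT)."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    f = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = {dev: spectrum[np.argmin(np.abs(f - fr))]
             for fr, dev in freqs.items()}
    return max(power, key=power.get)
```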
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
Virtual Keyboard for Hands-Free Operations
NASA Technical Reports Server (NTRS)
Abou-Ali, Abdel-Latief; Porter, William A.
1996-01-01
The measurement of direction of gaze (d.o.g.) has been used for clinical purposes to detect illnesses such as nystagmus, unusual fixation movements, and many others. It is also used to determine points of interest in objects. In this study we employ a measurement of d.o.g. as a computer interface. The interface provides a full keyboard as well as a mouse function. Such an interface is important to computer users with paralysis or in environments where a hands-free machine interface is required. The study utilizes the commercially available headset (ISCAN Model RK426TC), which consists of an infrared (IR) source and an IR camera to sense deflection of the illuminating beam. It also incorporates an image-processing package that provides the position of the pupil as well as the pupil size. The study shows the ability to implement a full keyboard, together with some control functions, imaged on a head-mounted monitor screen. This document is composed of four sections: (1) The Nature of the Equipment; (2) The Calibration Process; (3) Running Process; and (4) Conclusions.
Nature and origins of virtual environments - A bibliographical essay
NASA Technical Reports Server (NTRS)
Ellis, S. R.
1991-01-01
Virtual environments presented via head-mounted, computer-driven displays provide a new medium for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.
Radar cross calibration investigation TAMU radar polarimeter calibration measurements
NASA Technical Reports Server (NTRS)
Blanchard, A. J.; Newton, R. W.; Bong, S.; Kronke, C.; Warren, G. L.; Carey, D.
1982-01-01
A short pulse, 20 MHz bandwidth, three-frequency radar polarimeter system (RPS) operates at center frequencies of 10.003 GHz, 4.75 GHz, and 1.6 GHz and utilizes dual-polarized transmit and receive antennas for each frequency. The basic layout of the RPS differs from other truck-mounted systems in that it uses a pulse compression IF section common to all three RF heads. Separate transmit and receive antennas are used to improve the cross-polarization isolation at each particular frequency. The receiver is a digitally controlled, gain-modulated subsystem and is interfaced directly with a microprocessor computer for control and data manipulation. Antenna focusing distance, focusing of each antenna pair, RF head stability, and polarization characteristics of the RPS antennas are discussed. Platform and data acquisition procedures are described.
NASA Technical Reports Server (NTRS)
Adams, Richard J.
2015-01-01
The patent-pending Glove-Enabled Computer Operations (GECO) design leverages extravehicular activity (EVA) glove design features as platforms for instrumentation and tactile feedback, enabling the gloves to function as human-computer interface devices. Flexible sensors in each finger enable control inputs that can be mapped to any number of functions (e.g., a mouse click, a keyboard strike, or a button press). Tracking of hand motion is interpreted alternatively as movement of a mouse (change in cursor position on a graphical user interface) or a change in hand position on a virtual keyboard. Programmable vibro-tactile actuators aligned with each finger enrich the interface by creating the haptic sensations associated with control inputs, such as recoil of a button press.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustaining in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Passive wireless tags for tongue controlled assistive technology interfaces.
Rakibet, Osman O; Horne, Robert J; Kelly, Stephen W; Batchelor, John C
2016-03-01
Tongue control with low-profile, passive mouth tags is demonstrated as a human-device interface by communicating values of tongue-tag separation over a wireless link. Confusion matrices are provided to demonstrate user accuracy in targeting by tongue position. Accuracy is found to increase dramatically after short training sequences, with errors falling close to 1% in magnitude and zero missed targets. The rate at which users learn accurate targeting indicates that this is an intuitive device to operate. The significance of the work is that innovative, very unobtrusive wireless tags can be used to provide intuitive human-computer interfaces based on low-cost, disposable mouth-mounted technology. With the development of an appropriate reading system, control of assistive devices such as computer mice or wheelchairs could be possible for tetraplegics and others who retain fine motor control of their tongues. The tags contain no battery and are intended to fit directly on the hard palate, detecting tongue position in the mouth with no need for tongue piercings.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1991-01-01
Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized, high-performance scientific CCD camera with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
Evolution of brain-computer interfaces: going beyond classic motor physiology
Leuthardt, Eric C.; Schalk, Gerwin; Roland, Jarod; Rouse, Adam; Moran, Daniel W.
2010-01-01
The notion that a computer can decode brain signals to infer the intentions of a human and then enact those intentions directly through a machine is becoming a realistic technical possibility. These types of devices are known as brain-computer interfaces (BCIs). The evolution of these neuroprosthetic technologies could have significant implications for patients with motor disabilities by enhancing their ability to interact and communicate with their environment. The cortical physiology most investigated and used for device control has been brain signals from the primary motor cortex. To date, this classic motor physiology has been an effective substrate for demonstrating the potential efficacy of BCI-based control. However, emerging research now stands to further enhance our understanding of the cortical physiology underpinning human intent and provide further signals for more complex brain-derived control. In this review, the authors report the current status of BCIs and detail the emerging research trends that stand to augment clinical applications in the future. PMID:19569892
Learning toward practical head pose estimation
NASA Astrophysics Data System (ADS)
Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin
2017-08-01
Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
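The labeling-distribution idea is the most readily codeable part of this pipeline. A minimal 1-D sketch (the paper uses multivariate distributions over several pose angles; the Gaussian form, sigma, and grid below are assumptions for illustration) replaces a hard pose label with a dense soft label and recovers the estimate from the distribution's peak:

    import numpy as np

    def pose_label_distribution(true_yaw, grid, sigma=5.0):
        # Soft label for one training image: a Gaussian over a densely
        # sampled yaw grid instead of a one-hot class.
        d = np.exp(-0.5 * ((grid - true_yaw) / sigma) ** 2)
        return d / d.sum()

    grid = np.arange(-90.0, 91.0, 1.0)            # 1-degree sampling interval
    soft_label = pose_label_distribution(30.0, grid)
    estimate = grid[np.argmax(soft_label)]        # at test time: take the MLD peak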
EEGLAB, SIFT, NFT, BCILAB, and ERICA: New Tools for Advanced EEG Processing
Delorme, Arnaud; Mullen, Tim; Kothe, Christian; Akalin Acar, Zeynep; Bigdely-Shamlo, Nima; Vankov, Andrey; Makeig, Scott
2011-01-01
We describe a set of complementary EEG data collection and processing tools recently developed at the Swartz Center for Computational Neuroscience (SCCN) that connect to and extend the EEGLAB software environment, a freely available and readily extensible processing environment running under Matlab. The new tools include (1) a new and flexible EEGLAB STUDY design facility for framing and performing statistical analyses on data from multiple subjects; (2) a neuroelectromagnetic forward head modeling toolbox (NFT) for building realistic electrical head models from available data; (3) a source information flow toolbox (SIFT) for modeling ongoing or event-related effective connectivity between cortical areas; (4) a BCILAB toolbox for building online brain-computer interface (BCI) models from available data, and (5) an experimental real-time interactive control and analysis (ERICA) environment for real-time production and coordination of interactive, multimodal experiments. PMID:21687590
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.
US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avery, L.W.; O`Mara, P.A.; Shepard, A.P.
1996-09-30
A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon systems domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing domain-specific style guides, which will be used to guide the development of future systems within their domains.
Formal specification of human-computer interfaces
NASA Technical Reports Server (NTRS)
Auernheimer, Brent
1990-01-01
A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.
ERIC Educational Resources Information Center
Weller, Herman G.; Hartson, H. Rex
1992-01-01
Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…
Language evolution and human-computer interaction
NASA Technical Reports Server (NTRS)
Grudin, Jonathan; Norman, Donald A.
1991-01-01
Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.
Sawicka, Marta; Wanrooij, Paulina H; Darbari, Vidya C; Tannous, Elias; Hailemariam, Sarem; Bose, Daniel; Makarova, Alena V; Burgers, Peter M; Zhang, Xiaodong
2016-06-24
The phosphatidylinositol 3-kinase-related protein kinases are key regulators controlling a wide range of cellular events. The yeast Tel1 and Mec1·Ddc2 complex (ATM and ATR-ATRIP in humans) play pivotal roles in DNA replication, DNA damage signaling, and repair. Here, we present the first structural insight for dimers of Mec1·Ddc2 and Tel1 using single-particle electron microscopy. Both kinases reveal a head to head dimer with one major dimeric interface through the N-terminal HEAT (named after Huntingtin, elongation factor 3, protein phosphatase 2A, and yeast kinase TOR1) repeat. Their dimeric interface is significantly distinct from the interface of mTOR complex 1 dimer, which oligomerizes through two spatially separate interfaces. We also observe different structural organizations of kinase domains of Mec1 and Tel1. The kinase domains in the Mec1·Ddc2 dimer are located in close proximity to each other. However, in the Tel1 dimer they are fully separated, providing potential access of substrates to this kinase, even in its dimeric form. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Huo, Xueliang; Ghovanloo, Maysam
2010-01-01
The tongue drive system (TDS) is an unobtrusive, minimally invasive, wearable and wireless tongue–computer interface (TCI), which can infer its users' intentions, represented in their volitional tongue movements, by detecting the position of a small permanent magnetic tracer attached to the users' tongues. Any specific tongue movements can be translated into user-defined commands and used to access and control various devices in the users' environments. The latest external TDS (eTDS) prototype is built on a wireless headphone and interfaced to a laptop PC and a powered wheelchair. Using customized sensor signal processing algorithms and graphical user interface, the eTDS performance was evaluated by 13 naive subjects with high-level spinal cord injuries (C2–C5) at the Shepherd Center in Atlanta, GA. Results of the human trial show that an average information transfer rate of 95 bits/min was achieved for computer access with 82% accuracy. This information transfer rate is about two times higher than the EEG-based BCIs that are tested on human subjects. It was also demonstrated that the subjects had immediate and full control over the powered wheelchair to the extent that they were able to perform complex wheelchair navigation tasks, such as driving through an obstacle course. PMID:20332552
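For readers who want to relate accuracy and selection rate to a bits/min figure like the one quoted above, the standard Wolpaw information transfer rate can be computed as below. The numbers in the example are assumptions for illustration, not the paper's reported command-set size or selection rate:

    import numpy as np

    def wolpaw_itr(n_choices, accuracy, selections_per_min):
        # Bits per selection (Wolpaw formula) multiplied by the selection rate.
        p, n = accuracy, n_choices
        bits = (np.log2(n) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n - 1)))
        return bits * selections_per_min

    print(wolpaw_itr(6, 0.82, 40))   # e.g., 6 tongue commands at 82% accuracy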
System for assisted mobility using eye movements based on electrooculography.
Barea, Rafael; Boquete, Luciano; Mazo, Manuel; López, Elena
2002-12-01
This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMIs) based on EOG are discussed, focusing our study on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown, with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphic user interface run by the computer. This eye-control method can also be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
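Since the EOG amplitude grows roughly linearly with gaze angle within the socket, a simple mapping can illustrate the 'eye as computer mouse' mode. The gains and thresholds below are illustrative assumptions, not the system's calibrated values:

    def eog_to_cursor(v_h, v_v, gain=2.0e6, dead_zone=20e-6):
        # Map horizontal/vertical EOG voltages (V) to cursor velocities
        # (px/s); the dead zone rejects baseline drift and small saccades.
        def axis(v):
            if abs(v) < dead_zone:
                return 0.0
            return gain * (v - dead_zone if v > 0.0 else v + dead_zone)
        return axis(v_h), axis(v_v)

    vx, vy = eog_to_cursor(150e-6, -40e-6)   # -> rightward and downward motion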
Factors in Human-Computer Interface Design (A Pilot Study).
1994-12-01
This study used a pretest-posttest control group experimental design, with different prototypes, to test the effect of consistency in the human-computer interface on speed, retention, and user satisfaction.
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Control-display mapping in brain-computer interfaces.
Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter
2012-01-01
Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP and potential BCI performance. Ten participants attended to a target (determined via CDM), in a stream of sequentially vibrating tactors. We show that congruent CDM yields best task performance, enhanced the P300 and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.
Interfaces for Advanced Computing.
ERIC Educational Resources Information Center
Foley, James D.
1987-01-01
Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…
Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He
2016-08-01
Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting state EEG. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite this attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of the effects of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.
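The resting-state comparison reduces to band-power estimates. A minimal sketch of that analysis step, assuming a single resting-state channel and standard alpha/beta band limits (the study's exact channels and spectral parameters are not given here):

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, lo, hi):
        # Mean power spectral density in [lo, hi] Hz via Welch's method.
        f, psd = welch(eeg, fs=fs, nperseg=4 * fs)
        return psd[(f >= lo) & (f <= hi)].mean()

    # alpha_change = band_power(post_drink, fs, 8, 13) / band_power(pre_drink, fs, 8, 13)
    # beta_change  = band_power(post_drink, fs, 13, 30) / band_power(pre_drink, fs, 13, 30)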
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
Enabling Disabled Persons to Gain Access to Digital Media
NASA Technical Reports Server (NTRS)
Beach, Glenn; O'Grady, Ryan
2011-01-01
A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
Region based Brain Computer Interface for a home control application.
Akman Aydin, Eda; Bay, Omer Faruk; Guler, Inan
2015-08-01
Environment control is one of the important challenges for disabled people who suffer from neuromuscular diseases. A brain-computer interface (BCI) provides a communication channel between the human brain and the environment without requiring any muscular activation. The most important expectations for a home control application are high accuracy and reliable control. The region-based paradigm is a stimulus paradigm based on the oddball principle that requires selection of a target at two levels. This paper presents an application of the region-based paradigm to a smart home control application for people with neuromuscular diseases. In this study, a region-based stimulus interface containing 49 commands was designed. Five non-disabled subjects participated in the experiments. Offline analysis of the experiments yielded 95% accuracy for five flashes. This result showed that the region-based paradigm can successfully be used to select commands of a smart home control application with high accuracy and a low number of repetitions. Furthermore, no statistically significant difference was observed between the level accuracies.
An intelligent control and virtual display system for evolutionary space station workstation design
NASA Technical Reports Server (NTRS)
Feng, Xin; Niederjohn, Russell J.; Mcgreevy, Michael W.
1992-01-01
Research and development of the Advanced Display and Computer Augmented Control System (ADCACS) for the space station Body-Ported Cupola Virtual Workstation (BP/VCWS) were pursued. The potential applications of body-ported virtual display and intelligent control technology for human-system interfacing in the space station environment were explored. The new system is designed to enable crew members to control and monitor a variety of space operations with greater flexibility and efficiency than existing fixed consoles. The technologies being studied include helmet-mounted virtual displays, voice and special command input devices, and microprocessor-based intelligent controllers. Several research topics, such as human factors, decision support expert systems, and wide-field-of-view color displays, are being addressed. The study showed the significant advantages of this uniquely integrated display and control system, and its feasibility for human-system interfacing applications in the space station command and control environment.
How controllers compensate for the lack of flight progress strips.
DOT National Transportation Integrated Search
1996-02-01
The role of the Flight Progress Strip, currently used to display important flight data, has been debated because of long range plans to automate the air traffic control (ATC) human-computer interface. Currently, the Flight Progress Strip is viewed by ...
NASA Technical Reports Server (NTRS)
Johnson, David W.
1992-01-01
Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.
Design and development of a Space Station proximity operations research and development mockup
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1986-01-01
Proximity operations (Prox-Ops) on-orbit refers to all activities taking place within one km of the Space Station. Designing a Prox-Ops control station calls for a comprehensive systems approach which takes into account structural constraints, orbital dynamics including approach/departure flight paths, myriad human factors and other topics. This paper describes a reconfigurable full-scale mock-up of a Prox-Ops station constructed at Ames incorporating an array of windows (with dynamic star field, target vehicle(s), and head-up symbology), head-down perspective display of manned and unmanned vehicles, voice-actuated 'electronic checklist', computer-generated voice system, expert system (to help diagnose subsystem malfunctions), and other displays and controls. The facility is used for demonstrations of selected Prox-Ops approach scenarios, human factors research (workload assessment, determining external vision envelope requirements, head-down and head-up symbology design, voice synthesis and recognition research, etc.) and development of engineering design guidelines for future module interiors.
Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW
NASA Astrophysics Data System (ADS)
Schaars, F.; Bakker, M.
2012-12-01
We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head.
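For the single unconfined aquifer example, the whole recipe fits in a few lines. This is a sketch of the two steps exactly as stated in the abstract, assuming a relative seawater density of 1.025 so that the Ghijben-Herzberg factor is 40:

    def transformed_conductivity(k):
        # Step 1: multiply the hydraulic conductivity by 41 (and set the
        # aquifer base to mean sea level) before running standard MODFLOW.
        return 41.0 * k

    def interface_depth_below_msl(head):
        # Step 2: Ghijben-Herzberg puts the fresh/salt interface at 40 times
        # the computed head (above mean sea level) below mean sea level.
        return 40.0 * head

    print(interface_depth_below_msl(0.5))   # head of 0.5 m -> interface 20 m below MSL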
Information visualization: Beyond traditional engineering
NASA Technical Reports Server (NTRS)
Thomas, James J.
1995-01-01
This presentation addresses a different aspect of the human-computer interface; specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.
Box, Simon
2014-01-01
Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers can be used to capture human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
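The supervised-learning step is easy to sketch. Under assumed data shapes (queue lengths per junction arm as features, the player's chosen signal phase as the label; the paper's actual feature set is richer than this), a small neural network classifier of the kind described can be trained on logged game decisions:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.poisson(5.0, size=(200, 4))   # synthetic queue lengths on 4 arms
    y = X.argmax(axis=1)                  # toy "human" policy: serve the longest queue

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    print(clf.predict([[2, 9, 1, 3]]))    # -> phase serving arm 1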
A Head in Virtual Reality: Development of A Dynamic Head and Neck Model
ERIC Educational Resources Information Center
Nguyen, Ngan; Wilson, Timothy D.
2009-01-01
Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…
Levels of detail analysis of microwave scattering from human head models for brain stroke detection
2017-01-01
In this paper, we present a microwave scattering analysis of multiple human head models. The study incorporates different levels of detail in the head models and examines their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, both implemented in 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has been incorporated into our head models. The study identifies that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details from a magnetic resonance imaging database. It is also found that the microwave scattering results of the two head models (geometrically simple and anatomically realistic) match when the measurements are made in structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating the various levels of detail, the solution of the microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. A mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and fewer degrees of freedom, in a fast computational time. The results were promising, and the E-field values converged for both the simple and complex geometrical models. However, the E-field difference between the two head models at the same reference point varied considerably in magnitude: at a complex location, a difference of 0.04236 V/m was measured, compared with 0.00197 V/m at a simple location. This study also provides a comparative analysis of direct and iterative solvers for obtaining the solution of the microwave scattering problem in minimal computational time and memory. The results indicate that microwave imaging may effectively be utilized for the detection, localization, and differentiation of different types of brain stroke, and the simulations verified that microwave imaging can exploit the significant contrast between electric field values of normal and abnormal brain tissues for the investigation of brain anomalies. Finally, a specific absorption rate analysis was carried out to compare the exposure effects of microwave signals on the different head models using a factor of safety for brain tissues. After careful study of the various inversion methods in practice for microwave head imaging, it is suggested that the contrast source inversion method may be the most suitable and computationally efficient for such problems. PMID:29177115
Simulation of Physical Experiments in Immersive Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Wasfy, Tamer M.
2001-01-01
An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)
NASA Technical Reports Server (NTRS)
Savely, Robert T. (Editor)
1991-01-01
The papers from the symposium are presented. Emphasis is placed on human factors engineering and space environment interactions. The technical areas covered in the human factors section include: satellite monitoring and control, man-computer interfaces, expert systems, AI/robotics interfaces, crew system dynamics, and display devices. The space environment interactions section presents the following topics: space plasma interaction, spacecraft contamination, space debris, and atomic oxygen interaction with materials. Some of the above topics are discussed in relation to the space station and space shuttle.
Extending human proprioception to cyber-physical systems
NASA Astrophysics Data System (ADS)
Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David
2016-04-01
Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
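The abstract states only that vibration frequency encodes the arm's spatial configuration; one plausible mapping, with illustrative ranges rather than the authors' calibrated values, is a linear interpolation from joint angle to tactor drive frequency:

    def joint_to_vibration(angle, a_min=0.0, a_max=3.14, f_min=50.0, f_max=250.0):
        # Map a joint angle (rad) within its travel range to a tactor drive
        # frequency (Hz); clamp so out-of-range readings saturate.
        t = (angle - a_min) / (a_max - a_min)
        t = min(max(t, 0.0), 1.0)
        return f_min + t * (f_max - f_min)

    print(joint_to_vibration(1.57))   # mid-travel elbow -> ~150 Hz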
Portable long trace profiler: Concept and solution
NASA Astrophysics Data System (ADS)
Qian, Shinan; Takacs, Peter; Sostero, Giovanni; Cocco, Daniele
2001-08-01
Since the early development of the penta-prism long trace profiler (LTP) and the in situ LTP, and following the completion of the first in situ distortion profile measurements at Sincrotrone Trieste (ELETTRA) in Italy in 1995, a concept was developed for a compact, portable LTP with the following characteristics: easily installed on synchrotron radiation beam lines, easily carried to different laboratories around the world for measurements and calibration, convenient for evaluating the LTP as an in-process tool in the optical workshop, and convenient for temporary installation as required by other special applications. The initial design of a compact LTP optical head was made at ELETTRA in 1995. Since 1997, further efforts to reduce the optical head size and weight and to improve measurement stability have been made at Brookhaven National Laboratory. This article introduces the following solutions and accomplishments for the portable LTP: (1) a new design for a compact and very stable optical head; (2) the use of a small detector connected directly to a laptop computer via an enhanced parallel port, with no extra frame-grabber interface or control box; (3) a customized small mechanical slide that uses a compact motor with a connector-sized motor controller; and (4) the use of a laptop computer system. These solutions allow the portable LTP to be packed into two laptop-size cases: one for the computer and one for the rest of the system.
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The user-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.
Techno-Human Mesh: The Growing Power of Information Technologies.
ERIC Educational Resources Information Center
West, Cynthia K.
This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donald D Dudenhoeffer; Bruce P Hallbert
Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though an existing technology may well be in working order. Although ICHMI architectures are comprised of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential of improving the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
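The head-controlled camera mode of conditions 2 and 3 reduces to a mapping from tracked head orientation to camera pan/tilt commands. A sketch under assumed parameters (the dead zone and gain are illustrative, not the study's calibrated values):

    def head_to_camera_rates(yaw_deg, pitch_deg, dead_zone=5.0, gain=0.8):
        # Convert head yaw/pitch (deg) into camera pan/tilt rates (deg/s),
        # with a dead zone so small head movements leave the camera still.
        def rate(angle):
            if abs(angle) < dead_zone:
                return 0.0
            sign = 1.0 if angle > 0.0 else -1.0
            return gain * sign * (abs(angle) - dead_zone)
        return rate(yaw_deg), rate(pitch_deg)

    pan, tilt = head_to_camera_rates(12.0, -3.0)   # -> pan right, no tilt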
Beard, Brian B; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana
2006-06-05
The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position.
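The 1- and 10-g peak SAR figures come from mass-averaging the local SAR over contiguous tissue volumes. A deliberately crude cube-averaging sketch shows the idea; real IEEE/IEC procedures handle surfaces, air voxels, and cube growing, and the voxel size and density arrays here are assumptions:

    import numpy as np

    def peak_spatial_average_sar(sar, rho, voxel_m, target_kg=0.010):
        # Slide a cube holding roughly target_kg of tissue across the grid
        # and return the maximum mass-weighted mean SAR (W/kg).
        vox_mass = rho * voxel_m ** 3                       # kg per voxel
        n = max(1, int(round(float(target_kg / vox_mass.mean()) ** (1.0 / 3.0))))
        best = 0.0
        nx, ny, nz = sar.shape
        for i in range(nx - n + 1):
            for j in range(ny - n + 1):
                for k in range(nz - n + 1):
                    m = vox_mass[i:i + n, j:j + n, k:k + n]
                    s = sar[i:i + n, j:j + n, k:k + n]
                    best = max(best, float((s * m).sum() / m.sum()))
        return best

    # e.g., peak_spatial_average_sar(sar_grid, np.full(sar_grid.shape, 1000.0), 0.002)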
Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments
Víctor Rodrigo, Mercado-García
2017-01-01
Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCIs). A BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator is moving his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.
Assessing the Impact of Low Workload in Supervisory Control of Networked Unmanned Vehicles
2010-06-01
This study assessed the impact of low workload on operators controlling land, air, and sea vehicles of all different types from the same supervisory control interface. The population of interest contains men and women between the ages of 18 and 50 with an interest in using computers. Expressions indicated when boredom was occurring, and video coding shows that humans deal with boredom in different ways.
Formalisms for user interface specification and design
NASA Technical Reports Server (NTRS)
Auernheimer, Brent J.
1989-01-01
The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.
An Architectural Experience for Interface Design
ERIC Educational Resources Information Center
Gong, Susan P.
2016-01-01
The problem of human-computer interface design was brought to the foreground with the emergence of the personal computer, the increasing complexity of electronic systems, and the need to accommodate the human operator in these systems. With each new technological generation discovering the interface design problems of its own technologies, initial…
NASA Technical Reports Server (NTRS)
Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.
1993-01-01
Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
[Design and implementation of controlling smart car systems using P300 brain-computer interface].
Wang, Jinjia; Yang, Chengjie; Hu, Bei
2013-04-01
Using human electroencephalogram (EEG) signals to control external devices and achieve a variety of functions has been a focus of brain-computer interface (BCI) research. The P300 paradigm evokes EEG responses by flashing letters at the user and then identifying the corresponding letters from those responses. In this paper, some improvements were made to the P300 experiments. Firstly, the matrix of flashing letters was modified into words representing a certain sense. Secondly, the corresponding source code was added to the BCI2000 procedures. Thirdly, the smart car system was designed using radiofrequency signals. Finally, the evoked potentials were used to control the state of the smart car.
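For readers unfamiliar with the P300 paradigm described above, target selection typically averages the post-stimulus EEG epochs for each row and column of the flashing matrix and picks the pair with the strongest response in the P300 latency window. A minimal sketch, with illustrative array shapes and window indices (not the authors' BCI2000 code):

    import numpy as np

    def pick_target(epochs, labels, n_rows=6, n_cols=6, window=(60, 100)):
        """epochs: (n_flashes, n_samples) EEG epochs, one per row/column flash.
        labels: (n_flashes,) flash code, 0..n_rows-1 for rows, then columns."""
        a, b = window
        scores = np.array([epochs[labels == rc, a:b].mean()
                           for rc in range(n_rows + n_cols)])
        row = int(np.argmax(scores[:n_rows]))
        col = int(np.argmax(scores[n_rows:]))
        return row, col

    # Tiny synthetic demo: 120 flashes of 200-sample epochs; the attended
    # cell is row 2, column 3 (flash codes 2 and 6 + 3 = 9).
    rng = np.random.default_rng(0)
    labels = np.tile(np.arange(12), 10)
    epochs = rng.normal(size=(120, 200))
    epochs[np.isin(labels, [2, 9]), 60:100] += 1.0
    print(pick_target(epochs, labels))        # expected: (2, 3)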
An improved maximum permissible exposure meter for safety assessments of laser radiation
NASA Astrophysics Data System (ADS)
Corder, D. A.; Evans, D. R.; Tyrer, J. R.
1997-12-01
Current interest in laser radiation safety requires demonstration that a laser system has been designed to prevent exposure to levels of laser radiation exceeding the Maximum Permissible Exposure. In some simple systems it is possible to prove this by calculation, but in most cases it is preferable to confirm calculated results with a measurement. This measurement may be made with commercially available equipment, but there are limitations with this approach. A custom designed instrument is presented in which the full range of measurement issues has been addressed. Important features of the instrument are the design and optimisation of detector heads for the measurement task, and consideration of user interface requirements. Three detector head designs are presented, which cover the majority of common laser types. The detector heads are designed to optimise the performance of relatively low cost detector elements for this measurement task, and are suitable for interfacing to photodiodes, low power thermopiles and pyroelectric detectors. Design of the user interface was an important aspect of the work: a user interface designed for the specific application minimises the risk of user error or misinterpretation of the measurement results. A palmtop computer was used to provide an advanced user interface. User requirements were considered in order that the final instrument was well matched to the task of laser radiation hazard audits.
Control of a visual keyboard using an electrocorticographic brain-computer interface.
Krusienski, Dean J; Shih, Jerry J
2011-05-01
Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.
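The classifier in this study is linear; a common construction for P300-style spellers is a regularized least-squares (ridge) discriminant trained on labeled target/nontarget epochs. The sketch below is one such construction under that assumption, not the authors' exact pipeline; feature extraction is reduced to a precomputed feature matrix.

    import numpy as np

    def train_linear_scorer(X, y, ridge=1.0):
        """X: (n_epochs, n_features) epoch features; y: +1 target, -1 nontarget."""
        XtX = X.T @ X + ridge * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ y)   # ridge-regression weights

    def score_epochs(w, X):
        """Higher scores mean more P300-like; a speller would aggregate these
        scores over repeated stimulation sequences to pick the target letter."""
        return X @ w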
Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J
2010-01-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583
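The Kalman filter used here for velocity decoding admits a compact statement: a linear dynamics model on cursor velocity plus a linear observation model from binned firing rates. A minimal sketch, with all model matrices assumed to have been fit from training data (the dimensions below are invented):

    import numpy as np

    def kalman_decode(rates, A, H, Q, R):
        """rates: (T, n_units) binned firing rates -> (T, 2) velocity estimates."""
        n = A.shape[0]
        x = np.zeros(n)                        # state: [vx, vy]
        P = np.eye(n)
        out = []
        for z in rates:
            x = A @ x                          # predict with velocity dynamics
            P = A @ P @ A.T + Q
            S = H @ P @ H.T + R                # update with observation z = Hx + noise
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(n) - K @ H) @ P
            out.append(x.copy())
        return np.array(out)

    # Dry run with invented dimensions: 30 units observing 2D velocity.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(30, 2))
    A, Q, R = 0.95 * np.eye(2), 0.02 * np.eye(2), np.eye(30)
    vel = kalman_decode(rng.normal(size=(100, 30)), A, H, Q, R)
    print(vel.shape)                           # (100, 2)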
The experience of agency in human-computer interactions: a review
Limerick, Hannah; Coyle, David; Moore, James W.
2014-01-01
The sense of agency is the experience of controlling both one’s body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied “real-life” situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces. PMID:25191256
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
Non-invasive brain-computer interface system: towards its application as assistive technology.
Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio
2008-04-15
The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain-computer interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers to the user a communication interface that is matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects had not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI.
Non-invasive Brain-Computer Interface system: towards its application as assistive technology
Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio
2010-01-01
The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain-Computer Interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers to the user a communication interface that is matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects had not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI. PMID:18394526
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
Encoder-decoder optimization for brain-computer interfaces.
Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam
2015-06-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
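The idea of an optimal fixed decoder for a given (learned) encoding model can be made concrete in the linear-Gaussian case, where the mean-square-optimal linear decoder is simply a least-squares fit on calibration data. A toy sketch under that assumption (the paper's framework is more general, and all sizes here are invented):

    import numpy as np

    rng = np.random.default_rng(1)
    E = rng.normal(size=(20, 2))                # encoder: 2D intent -> 20 units
    intent = rng.normal(size=(500, 2))          # calibration intents
    rates = intent @ E.T + 0.1 * rng.normal(size=(500, 20))

    # Least-squares decoder D such that rates @ D approximates intent.
    D, *_ = np.linalg.lstsq(rates, intent, rcond=None)
    print(np.mean((rates @ D - intent) ** 2))   # calibration error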
Human-computer interface glove using flexible piezoelectric sensors
NASA Astrophysics Data System (ADS)
Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min
2017-05-01
In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
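Because polyvinylidene fluoride film produces charge in proportion to the rate of strain, one plausible way to recover a joint angle from the sensor output is to integrate the conditioned voltage, which is consistent with the note's comparison of processed angles against camera-recorded angles. A minimal sketch with an invented gain and a leak term to bound drift; the authors' actual signal conditioning is not specified here:

    import numpy as np

    def estimate_angle(voltage, dt, gain=1.0, leak=0.999):
        """Leaky integration of a conditioned piezo voltage trace."""
        angle = np.zeros_like(voltage, dtype=float)
        for i in range(1, len(voltage)):
            angle[i] = leak * angle[i - 1] + gain * voltage[i] * dt
        return angle

    # Demo: a bend-then-hold motion produces a voltage pulse; the
    # integrated signal approximates the held joint angle.
    v = np.r_[np.ones(50), np.zeros(150)]      # synthetic sensor pulse
    print(estimate_angle(v, dt=0.01)[-1])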
Mind-controlled transgene expression by a wireless-powered optogenetic designer cell implant.
Folcher, Marc; Oesterle, Sabine; Zwicky, Katharina; Thekkottil, Thushara; Heymoz, Julie; Hohmann, Muriel; Christen, Matthias; Daoud El-Baba, Marie; Buchmann, Peter; Fussenegger, Martin
2014-11-11
Synthetic devices for traceless remote control of gene expression may provide new treatment opportunities in future gene- and cell-based therapies. Here we report the design of a synthetic mind-controlled gene switch that enables human brain activities and mental states to wirelessly programme the transgene expression in human cells. An electroencephalography (EEG)-based brain-computer interface (BCI) processing mental state-specific brain waves programs an inductively linked wireless-powered optogenetic implant containing designer cells engineered for near-infrared (NIR) light-adjustable expression of the human glycoprotein SEAP (secreted alkaline phosphatase). The synthetic optogenetic signalling pathway interfacing the BCI with target gene expression consists of an engineered NIR light-activated bacterial diguanylate cyclase (DGCL) producing the orthogonal second messenger cyclic diguanosine monophosphate (c-di-GMP), which triggers the stimulator of interferon genes (STING)-dependent induction of synthetic interferon-β promoters. Humans generating different mental states (biofeedback control, concentration, meditation) can differentially control SEAP production of the designer cells in culture and of subcutaneous wireless-powered optogenetic implants in mice.
NASA Technical Reports Server (NTRS)
Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott
2013-01-01
The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit for tactile feedback in GECO glove use for data entry. This first-ever investigation of employment of a pressurized EVA glove for human-computer interface opens up a wide range of future applications, including text "chat" communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.
NASA Technical Reports Server (NTRS)
Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott
2013-01-01
The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit for tactile feedback in GECO glove use for data entry. This first-ever investigation of employment of a pressurized EVA glove for human-computer interface opens up a wide range of future applications, including text "chat" communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.
A Software Architecture for a Small Autonomous Underwater Vehicle Navigation System
1993-06-01
angle consistent with system accuracy objectives for the interim SANS system must be quantified. [Figure: depth, climb angle, and horizontal distance geometry.] Figure 4.1 illustrates the hardware interface: the computer (ESP-8080) receives binary gyro heading data and binary depth data over an RS-232 link. Mode 3 of the 82C54 provides a square wave through any of the 3 counters in the 82C54. An initial count N is written to the counter control register
A Framework and Implementation of User Interface and Human-Computer Interaction Instruction
ERIC Educational Resources Information Center
Peslak, Alan
2005-01-01
Researchers have suggested that up to 50% of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
Small computer interface to a stepper motor
NASA Technical Reports Server (NTRS)
Berry, Fred A., Jr.
1986-01-01
A Commodore VIC-20 computer has been interfaced with a stepper motor to provide an inexpensive stepper motor controller. Only eight transistors and two integrated circuits compose the interface. The software controls the parallel interface of the computer and provides the four phase drive signals for the motor. Optical sensors control the zeroing of the 12-inch turntable positioned by the controller. The computer calculates the position information and movement of the table and may be programmed in BASIC to execute automatic sequences.
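The four-phase drive signals mentioned above follow the standard full-step sequence, energizing one winding at a time. A sketch of that sequence (in Python rather than the VIC-20's BASIC, with the parallel-port write mocked as a callback, since the original hardware interface is not reproduced here):

    import time

    PHASES = [0b0001, 0b0010, 0b0100, 0b1000]   # one winding energized at a time

    def step(port_write, n_steps, delay_s=0.005, direction=1):
        """Advance the motor n_steps; port_write(pattern) drives the windings."""
        idx = 0
        for _ in range(n_steps):
            port_write(PHASES[idx % 4])
            idx += direction
            time.sleep(delay_s)

    # Example: print the phase patterns instead of writing to a real port.
    step(lambda p: print(f"{p:04b}"), 8)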
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from a descriptive research study on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and examined during human-computer interaction. This is new information that we should consider, as it can help us build better human-computer interfaces and human-robot interfaces in the future.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that, the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
U.S. Army weapon systems human-computer interface style guide. Version 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avery, L.W.; O`Mara, P.A.; Shepard, A.P.
1997-12-31
A stated goal of the US Army has been the standardization of the human computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2. The intent was to update some of the research and incorporate some enhancements. This document provides that revision. The purpose of this document is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing their domain-specific style guides, which will be used to guide the development of future systems within their subdomains.
Beard, Brian B.; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana
2018-01-01
The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position. PMID:29515260
Virtual Reality Simulation of the International Space Welding Experiment
NASA Technical Reports Server (NTRS)
Phillips, James A.
1996-01-01
Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) in collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) in collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted display and tested software filters to correct the problem; (5) in collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.
Customization of user interfaces to reduce errors and enhance user acceptance.
Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram
2014-03-01
Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary
Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
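The gating tolerances reported in the results suggest a simple safety envelope around the correction loop: hold the beam whenever the measured 6DOF error exceeds the translational or angular tolerance, and drive the stage toward zero error otherwise. The sketch below illustrates that logic only; the proportional gain and the callback interfaces are invented, and the authors' feed-forward decoupling of translation and rotation is not reproduced:

    import numpy as np

    TRANS_TOL_MM = 0.35      # translational gating tolerance
    PITCH_TOL_DEG = 0.15     # angular gating tolerance

    def control_step(err_mm, pitch_err_deg, beam_on, move_stage, gain=0.5):
        """One loop iteration: gate the beam on tolerance violation and
        command the stage to reduce the measured error."""
        out_of_tolerance = (np.linalg.norm(err_mm) > TRANS_TOL_MM
                            or abs(pitch_err_deg) > PITCH_TOL_DEG)
        beam_on(not out_of_tolerance)
        move_stage(-gain * np.asarray(err_mm), -gain * pitch_err_deg)

    # Mocked hardware callbacks for a dry run.
    control_step([0.10, 0.20, 0.00], 0.05,
                 beam_on=lambda on: print("beam on:", on),
                 move_stage=lambda d, p: print("move:", d, "pitch:", p))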
Beard, Brian B; Kainz, Wolfgang
2004-10-13
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.
Beard, Brian B; Kainz, Wolfgang
2004-01-01
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601
ERIC Educational Resources Information Center
Selverian, Melissa E. Markaridian; Lombard, Matthew
2009-01-01
A thorough review of the research relating to Human-Computer Interface (HCI) form and content factors in the education, communication and computer science disciplines reveals strong associations of meaningful perceptual "illusions" with enhanced learning and satisfaction in the evolving classroom. Specifically, associations emerge…
Ecological Interface Design for Computer Network Defense.
Bennett, Kevin B; Bryant, Adam; Sushereba, Christen
2018-05-01
A prototype ecological interface for computer network defense (CND) was developed. Concerns about CND run high. Although there is a vast literature on CND, there is some indication that this research is not being translated into operational contexts. Part of the reason may be that CND has historically been treated as a strictly technical problem, rather than as a socio-technical problem. The cognitive systems engineering (CSE)/ecological interface design (EID) framework was used in the analysis and design of the prototype interface. A brief overview of CSE/EID is provided. EID principles of design (i.e., direct perception, direct manipulation and visual momentum) are described and illustrated through concrete examples from the ecological interface. Key features of the ecological interface include (a) a wide variety of alternative visual displays, (b) controls that allow easy, dynamic reconfiguration of these displays, (c) visual highlighting of functionally related information across displays, (d) control mechanisms to selectively filter massive data sets, and (e) the capability for easy expansion. Cyber attacks from a well-known data set are illustrated through screen shots. CND support needs to be developed with a triadic focus (i.e., humans interacting with technology to accomplish work) if it is to be effective. Iterative design and formal evaluation is also required. The discipline of human factors has a long tradition of success on both counts; it is time that HF became fully involved in CND. Direct application in supporting cyber analysts.
Hajdukiewicz, John R; Vicente, Kim J
2002-01-01
Ecological interface design (EID) is a theoretical framework that aims to support worker adaptation to change and novelty in complex systems. Previous evaluations of EID have emphasized representativeness to enhance generalizability of results to operational settings. The research presented here is complementary, emphasizing experimental control to enhance theory building. Two experiments were conducted to test the impact of functional information and emergent feature graphics on adaptation to novelty and change in a thermal-hydraulic process control microworld. Presenting functional information in an interface using emergent features encouraged experienced participants to become perceptually coupled to the interface and thereby to exhibit higher-level control and more successful adaptation to unanticipated events. The absence of functional information or of emergent features generally led to lower-level control and less success at adaptation, the exception being a minority of participants who compensated by relying on analytical reasoning. These findings may have practical implications for shaping coordination in complex systems and fundamental implications for the development of a general unified theory of coordination for the technical, human, and social sciences. Actual or potential applications of this research include the design of human-computer interfaces that improve safety in complex sociotechnical systems.
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.
1994-01-01
An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
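A minimal version of the described pipeline, stacking EEG frequency-domain features with camera motion features, reducing dimension with linear discriminant analysis, and classifying with an SVM, can be sketched with scikit-learn; the feature arrays and their dimensions are illustrative assumptions, not the authors' feature set:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def train_detector(eeg_feats, cam_feats, labels):
        """eeg_feats, cam_feats: per-window feature arrays; labels: 0/1
        (head movement or not). Returns a fitted detector."""
        X = np.hstack([eeg_feats, cam_feats])
        clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
        return clf.fit(X, labels)

    # Synthetic demo with invented feature dimensions.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    eeg = rng.normal(size=(200, 8)) + y[:, None]     # 8 band-power features
    cam = rng.normal(size=(200, 2)) + y[:, None]     # 2 motion features
    print(train_detector(eeg, cam, y).score(np.hstack([eeg, cam]), y))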
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-06-01
Computer interface control is typically accomplished with an input "device" such as keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-01-01
Computer interface control is typically accomplished with an input "device" such as keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.
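One simple reading of the intuitive rotation input described above: displace the gaze point from the screen center and rotate the 3D object about the perpendicular axis, with a dead zone so that fixation near the center means hold. The gains and dead zone below are illustrative, not the authors' mapping:

    import numpy as np

    def gaze_to_rotation(gaze_xy, center_xy, gain=0.05, dead_px=30):
        """Map a gaze point to (pitch, yaw) increments in deg/frame."""
        dx, dy = np.subtract(gaze_xy, center_xy)
        if np.hypot(dx, dy) < dead_px:
            return 0.0, 0.0                   # fixation near center: hold
        return gain * dy, gain * dx

    print(gaze_to_rotation((740, 300), (640, 360)))   # gaze right: yaw > 0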
A Human Machine Interface for EVA
NASA Astrophysics Data System (ADS)
Hartmann, L.
EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a heads up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can respond with speech also in order to confirm selections, provide status and feedback and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field including robot joint torques, end effector configuration, procedure checklists and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command.
Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques. Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Reducing their dependence on such personnel may under many circumstances, however, improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
The Next Wave: Humans, Computers, and Redefining Reality
NASA Technical Reports Server (NTRS)
Little, William
2018-01-01
The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface... [it is] a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.
Baseline experiments in teleoperator control
NASA Technical Reports Server (NTRS)
Hankins, W. W., III; Mixon, R. W.
1986-01-01
Studies have been conducted at the NASA Langley Research Center (LaRC) to establish baseline human teleoperator interface data and to assess the influence of some of the interface parameters on human performance in teleoperation. As baseline data, the results will be used to assess future interface improvements resulting from this research in basic teleoperator human factors. In addition, the data have been used to validate LaRC's basic teleoperator hardware setup and to compare initial teleoperator study results. Four subjects controlled a modified industrial manipulator to perform a simple task involving both high and low precision. Two different schemes for controlling the manipulator were studied along with both direct and indirect viewing of the task. Performance of the task was measured as the length of time required to complete the task along with the number of errors made in the process. Analyses of variance were computed to determine the significance of the influences of each of the independent variables. Comparisons were also made between the LaRC data and data taken earlier by Grumman Aerospace Corp. at their facilities.
Human perceptual deficits as factors in computer interface test and evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowser, S.E.
1992-06-01
Issues related to testing and evaluating human-computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment are more likely to be defined than user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to be found in higher-than-overall population distribution in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Second, interface designs should use multimode information coding.
The use of graphics in the design of the human-telerobot interface
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1989-01-01
The Man-Systems Telerobotics Laboratory (MSTL) of NASA's Johnson Space Center employs computer graphics tools in their design and evaluation of the Flight Telerobotic Servicer (FTS) human/telerobot interface on the Shuttle and on the Space Station. It has been determined by the MSTL that the use of computer graphics can promote more expedient and less costly design endeavors. Several specific examples of computer graphics applied to the FTS user interface by the MSTL are described.
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of vision-based human-computer interaction systems are analyzed, along with their current practical applications and limiting factors. Information-processing methods are put forward. To make communication flexible and spontaneous, algorithms for adaptive control of the user's head movement were designed, and event-based methods and an object-oriented computer language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms meet the needs of the HCI.
Friedenberg, David A; Bouton, Chad E; Annetta, Nicholas V; Skomrock, Nicholas; Mingming Zhang; Schwemmer, Michael; Bockbrader, Marcia A; Mysiw, W Jerry; Rezai, Ali R; Bresler, Herbert S; Sharma, Gaurav
2016-08-01
Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that should not only process the data in real-time (for the BCI to be responsive) but are also robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
Human-computer interfaces applied to numerical solution of the Plateau problem
NASA Astrophysics Data System (ADS)
Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério
2015-09-01
In this work we present Matlab code to solve the Plateau problem numerically; the code includes a human-computer interface. The Plateau problem has applications in areas of knowledge such as Computer Graphics. The solution method is the same as that of the Surface Evolver, but with a complete graphical interface for the user. This will enable us to implement other kinds of interfaces, such as an ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community; in particular, its use is practically impossible for most physically challenged people.
PC/AT-based architecture for shared telerobotic control
NASA Astrophysics Data System (ADS)
Schinstock, Dale E.; Faddis, Terry N.; Barr, Bill G.
1993-03-01
A telerobotic control system must include teleoperational, shared, and autonomous modes of control in order to provide a robot platform for incorporating the rapid advances that are occurring in telerobotics and associated technologies. These modes along with the ability to modify the control algorithms are especially beneficial for telerobotic control systems used for research purposes. The paper describes an application of the PC/AT platform to the control system of a telerobotic test cell. The paper provides a discussion of the suitability of the PC/AT as a platform for a telerobotic control system. The discussion is based on the many factors affecting the choice of a computer platform for a real time control system. The factors include I/O capabilities, simplicity, popularity, computational performance, and communication with external systems. The paper also includes a description of the actuation, measurement, and sensor hardware of both the master manipulator and the slave robot. It also includes a description of the PC-Bus interface cards. These cards were developed by the researchers in the KAT Laboratory, specifically for interfacing to the master manipulator and slave robot. Finally, a few different versions of the low level telerobotic control software are presented. This software incorporates shared control by supervisory systems and the human operator and traded control between supervisory systems and the human operator.
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1990-01-01
An advanced human-system interface is being developed for evolutionary Space Station Freedom as part of the NASA Office of Space Station (OSS) Advanced Development Program. The human-system interface is based on body-pointed display and control devices. The project will identify and document the design accommodations ('hooks and scars') required to support virtual workstations and telepresence interfaces, and prototype interface systems will be built, evaluated, and refined. The project is a joint enterprise of Marquette University, Astronautics Corporation of America (ACA), and NASA's ARC. The project team is working with NASA's JSC and McDonnell Douglas Astronautics Company (the Work Package contractor) to ensure that the project is consistent with space station user requirements and program constraints. Documentation describing design accommodations and tradeoffs will be provided to OSS, JSC, and McDonnell Douglas, and prototype interface devices will be delivered to ARC and JSC. ACA intends to commercialize derivatives of the interface for use with computer systems developed for scientific visualization and system simulation.
Circling motion and screen edges as an alternative input method for on-screen target manipulation.
Ka, Hyun W; Simpson, Richard C
2017-04-01
To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around it. To specify a desired pointing command, each edge of the screen is used: the user selects a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). The circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse, and its accuracy outperformed the dwelling interface. It was demonstrated that the circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation: A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The circling interface can also work with AAC devices.
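The paper does not spell out its selection algorithm, but a natural way to detect that a pointer path has encircled a candidate target is to accumulate the path's winding number about the target's centre. A minimal sketch under that assumption (all names hypothetical; the authors' actual criterion may differ):

```python
import math

def winding_number(path, target):
    """Sum the signed angle swept by the pointer around the target.

    path   -- list of (x, y) cursor samples
    target -- (x, y) centre of the candidate on-screen object
    Returns the (fractional) number of counter-clockwise turns.
    """
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        a0 = math.atan2(y0 - target[1], x0 - target[0])
        a1 = math.atan2(y1 - target[1], x1 - target[0])
        da = a1 - a0
        # unwrap to the shortest signed rotation
        if da > math.pi:
            da -= 2 * math.pi
        elif da < -math.pi:
            da += 2 * math.pi
        total += da
    return total / (2 * math.pi)

def is_circled(path, target, threshold=0.9):
    # treat roughly one full loop around the target as a selection
    return abs(winding_number(path, target)) >= threshold
```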
Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
2016-01-01
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor, and the joystick enabled fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
Ubiquitous Wireless Smart Sensing and Control
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
An intelligent interface for satellite operations: Your Orbit Determination Assistant (YODA)
NASA Technical Reports Server (NTRS)
Schur, Anne
1988-01-01
An intelligent interface is often characterized by the ability to adapt evaluation criteria as the environment and user goals change. Some factors that impact these adaptations are redefinition of task goals and, hence, user requirements; time criticality; and system status. To implement adaptations affected by these factors, a new set of capabilities must be incorporated into the human-computer interface design. These capabilities include: (1) dynamic update and removal of control states based on user inputs, (2) generation and removal of logical dependencies as change occurs, (3) uniform and smooth interfacing to numerous processes, databases, and expert systems, and (4) unobtrusive on-line assistance to users. These concepts were applied and incorporated into a human-computer interface using artificial intelligence techniques to create a prototype expert system, Your Orbit Determination Assistant (YODA). YODA is a smart interface that supports, in real time, orbit analysts who must determine the location of a satellite during the station acquisition phase of a mission. Also described is the integration of four knowledge sources required to support the orbit determination assistant: orbital mechanics, spacecraft specifications, characteristics of the mission support software, and orbit analyst experience. This initial effort is continuing with expansion of YODA's capabilities, including evaluation of results of the orbit determination task.
Teaching Hyporheic and Groundwater Flow Concepts Using an Interactive Computer Simulation
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2016-12-01
We built an educational flow simulator with an interactive web-based interface that allows students to investigate the effects of arbitrary head functions on water flowing through various configurations of permeable/impermeable sediments. The domain consists of a 24 by 48 rectangular grid of sediments with no-flow bottom and side boundaries and a constant head surface water-groundwater (SWGW) interface boundary. The SWGW interface head function can be drawn freehand with the mouse or specified to be a step function, a sine curve, or a zig-zag function, where the amplitude and wavenumber parameters of the head functions are chosen by the user. The subsurface domain may be modified by drawing no-flow (impermeable) barriers in the sediment, changing any number of the 1152 grid cells into no flow cells. The program iteratively solves the Laplace equation to calculate head values at each grid cell within the sediment. Users can then start water particles along the SWGW interface and track their paths through the system to visualize the head-induced flow. Sediment cells can be color coded by head values or water speed. Exploring these systems with the simulator allows users to improve their understanding of the relationship between head and velocity as well as how the position of no-flow barriers impacts water flow in saturated sediments. These learning objectives are amenable to our target audience of undergraduate students, but younger (middle/high school) students may also be able to absorb key concepts by playing with the simulation. The structure of the simulation itself highlights the broader idea of simulation of natural processes through the discretization of continuous environments. The simulation was developed using the NetLogo platform and runs embedded in a webpage: http://susa.stonedahl.com/swgwsimulator. The simulation source code is available and can readily be modified by other educators (or students) to create additional features and options.
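The numerical core the abstract describes, iterative relaxation of the Laplace equation on a rectangular grid with a fixed-head surface boundary and no-flow cells, can be sketched compactly. A minimal Jacobi-style sketch under those assumptions (array names and iteration count are illustrative; a production solver would also mirror fluxes across internal barriers rather than pinning them):

```python
import numpy as np

def solve_head(surface_head, no_flow, n_iter=5000):
    """Relax the Laplace equation on a rectangular sediment grid.

    surface_head -- 1D array of fixed head values along the top boundary
    no_flow      -- boolean mask of impermeable cells (True = no flow)
    Side and bottom boundaries become no-flow via edge replication,
    which gives a zero head gradient (zero flux) across them.
    """
    ny, nx = no_flow.shape
    h = np.zeros((ny, nx))
    h[0, :] = surface_head
    for _ in range(n_iter):
        p = np.pad(h, 1, mode="edge")  # replicate edges: Neumann boundaries
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        h = np.where(no_flow, h, avg)  # skip updating impermeable cells
        h[0, :] = surface_head         # re-impose the fixed SWGW boundary
    return h
```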
Temperature Dependence of Nonelectrolyte Permeation across Red Cell Membranes
Galey, W. R.; Owen, J. D.; Solomon, A. K.
1973-01-01
The temperature dependence of permeation across human red cell membranes has been determined for a series of hydrophilic and lipophilic solutes, including urea and two methyl substituted derivatives, all the straight-chain amides from formamide through valeramide and the two isomers, isobutyramide and isovaleramide. The temperature coefficient for permeation by all the hydrophilic solutes is 12 kcal mol⁻¹ or less, whereas that for all the lipophilic solutes is 19 kcal mol⁻¹ or greater. This difference is consonant with the view that hydrophilic molecules cross the membrane by a path different from that taken by the lipophilic ones. The thermodynamic parameters associated with lipophile permeation have been studied in detail. ΔG is negative for adsorption of lipophilic amides onto an oil-water interface, whereas it is positive for transfer of the polar head from the aqueous medium to bulk lipid solvent. Application of absolute reaction rate theory makes it possible to make a clear distinction between diffusion across the water-red cell membrane interface and diffusion within the membrane. Diffusion coefficients and apparent activation enthalpies and entropies have been computed for each process. Transfer of the polar head from the solvent into the interface is characterized by ΔG‡ = 0 kcal mol⁻¹ and ΔS‡ negative, whereas both of these parameters have large positive values for diffusion within the membrane. Diffusion within the membrane is similar to what is expected for diffusion through a highly associated viscous fluid. PMID:4708405
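The temperature coefficients quoted above are apparent activation energies from an Arrhenius treatment of permeability measured at different temperatures. A worked sketch of that calculation with invented numbers (not data from the paper):

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def activation_energy(p1, t1, p2, t2):
    """Apparent activation energy Ea (kcal/mol) from permeabilities
    p1, p2 at absolute temperatures t1, t2 (K), via the Arrhenius form
    ln(p2/p1) = -(Ea/R) * (1/t2 - 1/t1)."""
    return -R * math.log(p2 / p1) / (1 / t2 - 1 / t1)

# illustrative values only -- not measurements from the study
ea = activation_energy(p1=1.0e-5, t1=283.15, p2=4.0e-5, t2=310.15)
print(f"Ea ~ {ea:.1f} kcal/mol")  # ~9 kcal/mol for this made-up pair
```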
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
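As a concrete example of such a model, the Keystroke-Level Model from the GOMS family predicts expert execution time by summing standardized operator times. A minimal sketch using the classic operator estimates published by Card, Moran, and Newell (the task sequence is a hypothetical illustration, not one from this paper):

```python
# Classic KLM operator time estimates, in seconds (Card, Moran & Newell)
OPERATORS = {
    "K": 0.20,  # keystroke, average skilled typist (~55 wpm)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # press or release a mouse button
}

def klm_time(sequence):
    """Predict expert execution time for a sequence of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# hypothetical task: think, point at a field, click, think, type 5 characters
task = ["M", "P", "B", "B", "M"] + ["K"] * 5
print(f"predicted time: {klm_time(task):.2f} s")  # 5.00 s
```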
A head movement image (HMI)-controlled computer mouse for people with disabilities.
Chen, Yu-Luen; Chen, Weoi-Luen; Kuo, Te-Son; Lai, Jin-Shin
2003-02-04
This study proposes image processing and microprocessor technology for developing a head movement image (HMI)-controlled computer mouse system for people with spinal cord injury (SCI). The system controls the movement and direction of the mouse cursor by capturing head movement images using a marker installed on the user's headset. In a clinical trial, the new mouse system was compared with an infrared-controlled mouse system on various tasks with nine subjects with SCI. The results favoured the new mouse system: the differences between the two systems reached statistical significance in each of the test situations (p<0.05). The HMI-controlled computer mouse improves input speed. People with disabilities need only wear the headset and move their heads to freely control the movement of the mouse cursor.
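The image-processing core of such a system, locating the headset marker in each frame and converting its displacement into cursor motion, can be sketched briefly. A minimal illustration assuming a grayscale frame in which the marker is the brightest region (threshold and gain values are invented):

```python
import numpy as np

def marker_centroid(frame, thresh=200):
    """Centroid (x, y) of pixels brighter than `thresh`, or None if absent."""
    ys, xs = np.nonzero(frame > thresh)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def cursor_delta(prev, curr, gain=3.0):
    """Relative-mode mapping: marker displacement -> cursor displacement."""
    return gain * (curr[0] - prev[0]), gain * (curr[1] - prev[1])
```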
On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.
Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen
2014-07-01
To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.
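The pointing throughputs quoted in bits/s are Fitts'-law style measures. The standard computation divides the Shannon index of difficulty by movement time; a minimal sketch of that formulation (which may differ in detail from the paper's exact procedure):

```python
import math

def throughput(distance, width, movement_time):
    """Fitts' law throughput in bits/s (Shannon formulation, ISO 9241-9 style).

    distance      -- centre-to-centre distance to the target
    width         -- target width along the movement axis
    movement_time -- time to acquire the target, in seconds
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time

# e.g. a 40-px target 400 px away reached in 7 s -> ~0.49 bits/s,
# in the range reported for early tongue-interface sessions
print(f"{throughput(400, 40, 7.0):.2f} bits/s")
```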
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549
Implantable brain computer interface: challenges to neurotechnology translation.
Konrad, Peter; Shanks, Todd
2010-06-01
This article reviews three concepts related to implantable brain computer interface (BCI) devices being designed for human use: neural signal extraction primarily for motor commands, signal insertion to restore sensation, and the technological challenges that remain. A significant body of literature has accumulated over the past four decades regarding motor cortex signal extraction for upper extremity movement or computer interface; however, little is discussed regarding postural or ambulation command signaling. Auditory prosthesis research continues to represent the majority of literature on BCI signal insertion. Significant hurdles remain in the technological translation of BCI implants. These include developing a stable neural interface, significantly increasing signal processing capabilities, and methods of data transfer throughout the human body. The past few years, however, have provided extraordinary human examples of BCI implant potential. Despite technological hurdles, proof-of-concept animal and human studies provide significant encouragement that BCI implants may well find their way into mainstream medical practice in the foreseeable future.
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
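The control loop described, accept a byte code from the host, compare it with prestored codes, and open or close the corresponding switch, is straightforward to sketch. A hypothetical host-side software analog in Python using pyserial (the real device used an 8251 UART and ROM lookup; command codes, port name, and switch mapping here are all invented for illustration):

```python
import serial  # pyserial

# hypothetical byte-code table: code -> (switch index, close?)
COMMANDS = {
    0x10: (0, True),   # pan left on
    0x11: (0, False),  # pan left off
    0x20: (5, True),   # zoom in on
    0x21: (5, False),  # zoom in off
}

switches = [False] * 48  # the controller drives up to 48 switches

def dispatch(code):
    """Compare an incoming byte with the stored table and set a switch."""
    if code in COMMANDS:
        idx, closed = COMMANDS[code]
        switches[idx] = closed

port = serial.Serial("/dev/ttyS0", 9600)  # RS-232 link to the host
while True:
    dispatch(port.read(1)[0])  # blocks until one byte arrives
```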
Human factors aspects of control room design
NASA Technical Reports Server (NTRS)
Jenkins, J. P.
1983-01-01
A plan for the design and analysis of a multistation control room is reviewed. It is found that acceptance of the computer-based information system by the users in the control room is mandatory for mission and system success. Criteria to improve the computer/user interface include: match of system input/output with user; reliability, compatibility and maintainability; easy to learn with little training needed; self-descriptive system; system under user control; transparent language, format and organization; correspondence to user expectations; adaptability to user experience level; fault tolerance; dialog capability; user communications needs reflected in flexibility, complexity, power and information load; integrated system; and documentation.
Perspectives on Human-Computer Interface: Introduction and Overview.
ERIC Educational Resources Information Center
Harman, Donna; Lunin, Lois F.
1992-01-01
Discusses human-computer interfaces in information seeking that focus on end users, and provides an overview of articles in this section that (1) provide librarians and information specialists with guidelines for selecting information-seeking systems; (2) provide producers of information systems with directions for production or research; and (3)…
Learning Machine, Vietnamese Based Human-Computer Interface.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…
High density tape/head interface study
NASA Technical Reports Server (NTRS)
Csengery, L. C.
1983-01-01
The high-energy (Hc ≈ 650 oersteds) tapes and high-track-density (84 tracks per inch) heads were investigated with the goal of defining optimum combinations of head and tape, including the control required of their interfacial dynamics, that would enable the manufacture of high-rate (150 Mbps) digital tape recorders for unattended space flight.
Implementing virtual reality interfaces for the geosciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.; Jacobsen, J.; Austin, A.
1996-06-01
For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, ranging the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display. Rich, extensible and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is the obstacle in that typing is cumbersome, error-prone and typically slow. In the latter, the user can interact with these parameters by means of motor skills which are highly developed. Two specific geoscience application areas will be highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.
Farhoudi, Hamidreza; Fallahnezhad, Khosro; Oskouei, Reza H; Taylor, Mark
2017-11-01
This paper investigates the mechanical response of a modular head-neck interface of hip joint implants under realistic loads of level walking. The realistic loads of the walking activity consist of three-dimensional gait forces and the associated frictional moments. These forces and moments were extracted for a 32 mm metal-on-metal bearing couple. A previously reported geometry of a modular CoCr/CoCr head-neck interface with a proximal contact was used for this investigation. An explicit finite element analysis was performed to investigate the interface mechanical responses. To study the level of contribution and also the effect of superposition of the load components, three different scenarios of loading were studied: gait forces only, frictional moments only, and combined gait forces and frictional moments. Stress field, micro-motions, shear stresses and fretting work at the contacting nodes of the interface were analysed. Gait forces only were found to significantly influence the mechanical environment of the head-neck interface by temporarily extending the contacting area (8.43% of initially non-contacting surface nodes temporarily came into contact), and therefore changing the stress field and resultant micro-motions during the gait cycle. The frictional moments only did not cause considerable changes in the mechanical response of the interface (only 0.27% of the non-contacting surface nodes temporarily came into contact). However, when superposed with the gait forces, the mechanical response of the interface, particularly micro-motions and fretting work, changed compared to the forces-only case. The normal contact stresses and micro-motions obtained from this realistic load-controlled study were typically in the ranges of 0-275 MPa and 0-38 µm, respectively. These ranges are comparable to previous experimental displacement-controlled pin/cylinder-on-disk fretting corrosion studies.
Intelligent user interface concept for space station
NASA Technical Reports Server (NTRS)
Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen
1986-01-01
The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.
NASA Astrophysics Data System (ADS)
Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.
2008-12-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
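The decoding method credited with the improvement, a Kalman filter relating cursor velocity to neuronal ensemble spiking, follows the standard linear-Gaussian recursion. A minimal sketch under those standard assumptions (the matrices would be fit to training data; names here are generic, not the study's code):

```python
import numpy as np

class KalmanVelocityDecoder:
    """x_t = A x_{t-1} + w,  z_t = H x_t + q,  w ~ N(0, W), q ~ N(0, Q).

    x: cursor velocity state (2D); z: binned spike counts per channel.
    """
    def __init__(self, A, W, H, Q):
        self.A, self.W, self.H, self.Q = A, W, H, Q
        self.x = np.zeros(A.shape[0])
        self.P = np.eye(A.shape[0])

    def step(self, z):
        # predict forward one bin
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # update with the new neural observation
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x  # decoded velocity; integrate it to move the cursor
```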
Hybrid soft computing systems for electromyographic signals analysis: a review.
Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates
2014-02-03
The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.
2016-01-01
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
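The decoding pipeline, unsupervised clustering of high-dimensional ECoG features followed by automatic annotation from synchronized audio/video labels, can be sketched with standard tools. A schematic illustration using SciPy's hierarchical clustering (feature extraction and label alignment are assumed done; all names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_ecog(features, n_clusters=5):
    """Hierarchical (Ward) clustering of per-window ECoG feature vectors.

    features -- array of shape (n_windows, n_features),
                e.g. band power per electrode per time window
    """
    Z = linkage(features, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

def annotate(cluster_ids, av_labels):
    """Name each cluster by the most common audio/video label inside it."""
    names = {}
    for c in np.unique(cluster_ids):
        labels = [av_labels[i] for i in np.nonzero(cluster_ids == c)[0]]
        names[c] = max(set(labels), key=labels.count)
    return names

# av_labels: per-window strings such as "speaking", "moving", "resting"
```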
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Flash drive memory apparatus and method
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor)
2010-01-01
A memory apparatus includes a non-volatile computer memory, a USB mass storage controller connected to the non-volatile computer memory, the USB mass storage controller including a daisy chain component, a male USB interface connected to the USB mass storage controller, and at least one other interface for a memory device, other than a USB interface, the at least one other interface being connected to the USB mass storage controller.
Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks
NASA Astrophysics Data System (ADS)
Meng, Jianjun; Zhang, Shuying; Bekyo, Angeliki; Olsoe, Jaron; Baxter, Bryan; He, Bin
2016-12-01
Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.
Interaction design challenges and solutions for ALMA operations monitoring and control
NASA Astrophysics Data System (ADS)
Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar
2012-09-01
The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.
Eye Tracking Based Control System for Natural Human-Computer Interaction
Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528
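The abstract does not spell out its click mechanism, but the baseline such eye-control systems are usually measured against is dwell selection: a click fires when gaze rests within a small radius for a fixed time. A generic sketch of that baseline (class name and parameters are illustrative, not from the paper):

```python
import time

class DwellClicker:
    """Trigger a click when gaze stays within `radius` px for `dwell` s."""
    def __init__(self, radius=30, dwell=1.0):
        self.radius, self.dwell = radius, dwell
        self.anchor, self.t0 = None, None

    def update(self, gaze, now=None):
        now = time.monotonic() if now is None else now
        dx = dy = 0.0
        if self.anchor is not None:
            dx, dy = gaze[0] - self.anchor[0], gaze[1] - self.anchor[1]
        if self.anchor is None or dx * dx + dy * dy > self.radius ** 2:
            self.anchor, self.t0 = gaze, now   # gaze moved: restart the timer
            return False
        if now - self.t0 >= self.dwell:
            self.anchor, self.t0 = None, None  # fire once, then reset
            return True
        return False
```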
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1993-01-01
This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.
Najafi, Mohsen; Teimouri, Javad; Shirazi, Alireza; Geraily, Ghazale; Esfahani, Mahbod; Shafaei, Mostafa
2017-10-01
Stereotactic radiosurgery is a high-precision modality for conformally delivering high doses of radiation to a brain lesion with a large dose volume. Several studies of the quality control of this technique measured the dose delivered to the target using a homogeneous head phantom and various dosimeters. Some studies were also performed with one or two instances of heterogeneity in the head phantom, but these studies modeled the head as a sphere with simple-shaped heterogeneities. The construction of an adult human head phantom with the same size, shape, and real inhomogeneity as an adult human head is needed; only then is it possible to measure the accurate dose delivered to the area of interest and compare it with the calculated dose. According to ICRU Report 44, polytetrafluoroethylene (PTFE) and methyl methacrylate were selected as bone and soft tissue substitutes, respectively. A set of computed tomography (CT) scans of a standard human head was taken, and simplified CT images were used to design the layers of the phantom. The parts of each slice were cut and attached together. Tests of density and CT number were done to compare the phantom materials with head tissues. The dose delivered to the target was measured with an EBT3 film. The densities of the PTFE and Plexiglas inserted in the phantom are in good agreement with bone and soft tissue, and the CT numbers of these materials differ only slightly from those of the corresponding tissues. The dose distributions from the EBT3 film and the treatment planning system are similar. The constructed phantom, with size and inhomogeneity like an adult human head, is suitable for measuring the dose delivered to the area of interest and enables an accurate comparison with the dose calculated by the treatment planning system. Using this phantom, the actual dose delivered to the target was obtained. This anthropomorphic head phantom can be used in other radiosurgery modalities as well.
Assessment of a human computer interface prototyping environment
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1993-01-01
A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed which will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) a HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.
Virtual head rotation reveals a process of route reconstruction from human vestibular signals
Day, Brian L; Fitzpatrick, Richard C
2005-01-01
The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled. PMID:16002439
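The model's core computation, the vector dot product between the craniocentric head-rotation vector and the gravitational unit vector, is compact enough to state directly. A sketch of just that projection (axis conventions and numbers are illustrative; the paper's full model also incorporates canal anatomy and firing properties):

```python
import numpy as np

def perceived_yaw_rate(omega_head, gravity):
    """Project the craniocentric rotation vector onto the gravity axis.

    omega_head -- head-fixed angular velocity vector (rad/s)
    gravity    -- gravity direction expressed in the same head-fixed frame
    Returns the signal of body rotation in the horizontal plane.
    """
    g_unit = gravity / np.linalg.norm(gravity)
    return float(np.dot(omega_head, g_unit))

# a virtual head rotation fixed in skull coordinates, as in the stimulation
omega = np.array([0.5, 0.0, 0.0])
# pitching the head forward or backward rotates gravity in the head frame
for pitch_deg in (-45, 0, 45):
    r = np.radians(pitch_deg)
    g = np.array([np.sin(r), 0.0, np.cos(r)])  # hypothetical axis convention
    print(pitch_deg, round(perceived_yaw_rate(omega, g), 3))
# sign flips with pitch direction and vanishes at the intermediate position,
# matching the illusory whole-body yaw described in the abstract
```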
Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.
2016-01-01
The control of human bipedal locomotion is of great interest to the field of lower-body brain computer interfaces (BCIs) for rehabilitation of gait. While the feasibility of a closed-loop BCI system for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a virtual reality (BCI-VR) environment has yet to be demonstrated. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control the walking movements of a virtual avatar. Moreover, virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. These findings have implications for the development of BCI-VR systems for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI system. PMID:27713915
A reductionist approach to the analysis of learning in brain-computer interfaces.
Danziger, Zachary
2014-04-01
The complexity and scale of brain-computer interface (BCI) studies limit our ability to investigate how humans learn to use BCI systems. It also limits our capacity to develop adaptive algorithms needed to assist users with their control. Adaptive algorithm development is forced offline and typically uses static data sets. But this is a poor substitute for the online, dynamic environment where algorithms are ultimately deployed and interact with an adapting user. This work evaluates a paradigm that simulates the control problem faced by human subjects when controlling a BCI, but which avoids the many complications associated with full-scale BCI studies. Biological learners can be studied in a reductionist way as they solve BCI-like control problems, and machine learning algorithms can be developed and tested in closed loop with the subjects before being translated to full BCIs. The method is to map 19 joint angles of the hand (representing neural signals) to the position of a 2D cursor which must be piloted to displayed targets (a typical BCI task). An investigation is presented on how closely the joint angle method emulates BCI systems; a novel learning algorithm is evaluated, and a performance difference between genders is discussed.
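The control problem reduces to a fixed linear map from a 19-dimensional input (hand joint angles standing in for neural signals) to a 2D cursor position, which the subject must learn to invert. A minimal sketch of that setup, with crude gradient descent standing in for the human learner (the dimensions come from the abstract; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 19)) * 0.1  # fixed, initially arbitrary decoder

def cursor_position(joint_angles):
    """Map 19 hand joint angles (the stand-in 'neural signals') to 2D."""
    return W @ joint_angles

# the subject's task: find a hand posture that drives the cursor to a target
target = np.array([0.3, -0.2])
posture = np.zeros(19)
for _ in range(200):  # gradient descent as a toy model of motor learning
    err = cursor_position(posture) - target
    posture -= 0.5 * (W.T @ err)
print(np.round(cursor_position(posture), 3))  # converges near the target
```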
Designing an operator interface? Consider user's 'psychology'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toffer, D.E.
The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design considerations. All too often, however, systems go to the field designed for the software rather than the operator. Plant operators' jobs have changed fundamentally, from controlling their plants from out in the field to doing so from within control rooms. Control room-based operation does not denote idleness. Trained operators should be engaged in examination of plant status and cognitive evaluation of plant efficiencies. Designers who are extremely computer literate often do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems. As a result, they do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and to understand how they process this information physically. The physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) easily navigated menus; (2) reduced chances for misunderstanding; (3) accurate representations of the plant or operation; (4) consistent and predictable operation; (5) a pleasant and engaging interface that conforms to the operator's expectations.
Simpson, Tyler; Gauthier, Michel; Prochazka, Arthur
2010-02-01
Computer access can play an important role in employment and leisure activities following spinal cord injury. The authors' prior work has shown that a tooth-click detecting device, when paired with an optical head mouse, may be used by people with tetraplegia for controlling cursor movement and mouse button clicks. To compare the efficacy of tooth clicks to speech recognition and that of an optical head mouse to a gyrometer head mouse for cursor and mouse button control of a computer. Six able-bodied and 3 tetraplegic subjects used the devices listed above to produce cursor movements and mouse clicks in response to a series of prompts displayed on a computer. The time taken to move to and click on each target was recorded. The use of tooth clicks in combination with either an optical head mouse or a gyrometer head mouse can provide hands-free cursor movement and mouse button control at a speed of up to 22% of that of a standard mouse. Tooth clicks were significantly faster at generating mouse button clicks than speech recognition when paired with either type of head mouse device. Tooth-click detection performed better than speech recognition when paired with both the optical head mouse and the gyrometer head mouse. Such a system may improve computer access for people with tetraplegia.
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; improve control of movement disorders; and enhance memory. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
1981-08-01
...cinematic simulation, and interactive computer-control display devices have been reviewed by Roscoe [35]. Developments on some of these training... R. S., Study and Analysis of Requirements for Head-Up Display (HUD), NASA CR-6612, March 1970. 6. Burnette, K. T., "The Status of Human Perceptual...
Zander, Thorsten O.; Andreessen, Lena M.; Berg, Angela; Bleuel, Maurice; Pawlitzki, Juliane; Zawallich, Lars; Krol, Laurens R.; Gramann, Klaus
2017-01-01
We tested the applicability and signal quality of a 16 channel dry electroencephalography (EEG) system in a laboratory environment and in a car under controlled, realistic conditions. The aim of our investigation was to estimate how well a passive Brain-Computer Interface (pBCI) can work in an autonomous driving scenario. The evaluation considered speed and accuracy of self-applicability by an untrained person, quality of recorded EEG data, shifts of electrode positions on the head after driving-related movements, usability and complexity of the system as such, and wearing comfort over time. An experiment was conducted inside and outside of a stationary vehicle with running engine, air-conditioning, and muted radio. Signal quality was sufficient for standard EEG analysis in the time and frequency domain as well as for the use in pBCIs. While the influence of vehicle-induced interferences on data quality was insignificant, driving-related movements led to strong shifts in electrode positions. In general, the EEG system used allowed for fast self-applicability of cap and electrodes. The assessed usability of the system was still acceptable, while the wearing comfort decreased strongly over time due to friction and pressure on the head. From these results we conclude that the evaluated system should provide the essential requirements for an application in an autonomous driving context. Nevertheless, further refinement is suggested to reduce shifts of the system due to body movements and to increase the headset's usability and wearing comfort. PMID:28293184
A pilot study comparing mouse and mouse-emulating interface devices for graphic input.
Kanny, E M; Anson, D K
1991-01-01
Adaptive interface devices make it possible for individuals with physical disabilities to use microcomputers and thus perform many tasks that they would otherwise be unable to accomplish. Special equipment is available that purports to allow functional access to the computer for users with disabilities. As technology moves from purely keyboard applications to include graphic input, it will be necessary for assistive interface devices to support graphics as well as text entry. Headpointing systems that emulate the mouse, in combination with on-screen keyboards, are of particular interest to persons with severe physical impairment such as high-level quadriplegia. Two such systems currently on the market are the HeadMaster and the Free Wheel. The authors have conducted a pilot study comparing graphic input speed using the mouse and two headpointing interface systems on the Macintosh computer. The study used a single-subject design with six able-bodied subjects to establish a baseline for comparison with persons with severe disabilities. These preliminary data indicated that the HeadMaster was nearly as effective as the mouse and that it was superior to the Free Wheel for graphics input. This pilot study, however, demonstrated several experimental design problems that need to be addressed to make the study more robust. It also demonstrated the need to include the evaluation of text input so that the effectiveness of the interface devices with text and graphic input could be compared.
Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface
Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.
2014-01-01
Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75–200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report 3 characteristic periods in evolution of the high-gamma control signal during BCI training: initial, low task accuracy with corresponding low power modulation in the gamma spectrum, followed by a second period of improved task accuracy with increasing average power separation between activity and rest, and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications in the design and implementation of BCI control algorithms. PMID:25599079
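The control feature in this class of BCI is the trial-wise power of the high-gamma band. A minimal sketch of how such a feature can be computed (filter order, sampling rate, and the toy epochs are assumptions, not the authors' exact pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def high_gamma_power(ecog, fs=1000.0, band=(75.0, 200.0)):
    """Band-pass one ECoG channel and return its mean power in the
    75-200 Hz high-gamma range (sketch of the control feature)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ecog)
    return np.mean(filtered ** 2)

# Toy comparison of an "active" vs "rest" epoch:
rng = np.random.default_rng(1)
rest = rng.standard_normal(2000)
active = rest + 0.5 * np.sin(2 * np.pi * 120 * np.arange(2000) / 1000)
print(high_gamma_power(active) > high_gamma_power(rest))  # True
```

The growing separation between the "active" and "rest" values of this feature, and its shrinking trial-to-trial variance, are exactly the quantities tracked across the three training periods described above.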
Intelligent Context-Aware and Adaptive Interface for Mobile LBS
Liu, Yanhong
2015-01-01
Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes that interact in spatial query, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment; the experimental results suggest its feasibility. PMID:26457077
The development of the Canadian Mobile Servicing System Kinematic Simulation Facility
NASA Technical Reports Server (NTRS)
Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.
1989-01-01
Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
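At the heart of such a facility is forward kinematics: computing link positions from joint angles for display and analysis. A minimal planar sketch (a two-link chain with illustrative link lengths; the real SSRMS has seven joints and full 3D geometry):

```python
import numpy as np

def planar_fk(joint_angles, link_lengths):
    """Forward kinematics of a planar serial manipulator: accumulate
    joint angles and sum link vectors to get each joint's position."""
    positions, theta, point = [np.zeros(2)], 0.0, np.zeros(2)
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle
        point = point + length * np.array([np.cos(theta), np.sin(theta)])
        positions.append(point)
    return np.array(positions)

# Two 7 m links (roughly boom-scale values, assumed for illustration):
print(planar_fk([np.pi / 4, -np.pi / 6], [7.0, 7.0]))
```

A graphics front end then only needs to redraw the returned joint positions each frame as the operator commands joint rates from the virtual control station.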
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work1 that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
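The implicit-measurement loop described above reduces, at its core, to training a classifier on fNIRS-derived features and acting on its real-time predictions. A hedged sketch with synthetic data (the feature definition and the LDA classifier are assumptions, not necessarily the authors' pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical features: mean oxy- and deoxy-hemoglobin changes per
# fNIRS channel over a trial window, labeled by task difficulty.
rng = np.random.default_rng(7)
n_trials, n_features = 120, 16          # e.g. 8 channels x 2 chromophores
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)        # 0 = low, 1 = high workload
X[y == 1] += 0.8                        # toy separability

clf = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
# In the adaptive loop, each new window's prediction would drive the
# interface, e.g. shedding tasks when predicted workload is high.
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```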
Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification
Ahmed, Syed Adeel (Xavier University)
2017-08-08
...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In... the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods...
Gray, Charles M; Goodell, Baldwin; Lear, Alex
2007-07-01
We describe the design and performance of an electromechanical system for conducting multineuron recording experiments in alert non-human primates. The system is based on a simple design, consisting of a microdrive, control electronics, software, and a unique type of recording chamber. The microdrive consists of an aluminum frame, a set of eight linear actuators driven by computer-controlled miniature stepping motors, and two printed circuit boards (PCBs) that provide connectivity to the electrodes and the control electronics. The control circuitry is structured around an Atmel RISC-based microcontroller, which sends commands to as many as eight motor control cards, each capable of controlling eight motors. The microcontroller is programmed in C and uses serial communication to interface with a host computer. The graphical user interface for sending commands is written in C and runs on a conventional personal computer. The recording chamber is low in profile, mounts within a circular craniotomy, and incorporates a removable internal sleeve. A replaceable Sylastic membrane can be stretched across the bottom opening of the sleeve to provide a watertight seal between the cranial cavity and the external environment. This greatly reduces the susceptibility to infection, nearly eliminates the need for routine cleaning, and permits repeated introduction of electrodes into the brain at the same sites while maintaining the watertight seal. The system is reliable, easy to use, and has several advantages over other commercially available systems with similar capabilities.
Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool
NASA Astrophysics Data System (ADS)
Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong
2016-06-01
The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool, which is able to investigate stratums in a relatively larger range of space around the borehole. The BAAR is designed based on the idea of modularization with a very complex structure, so it has become urgent for us to develop a dedicated test-bench system to debug each module of the BAAR. With the help of a test-bench system introduced in this paper, test and calibration of BAAR can be easily achieved. The test-bench system is designed based on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human machine interface and processes the uploaded data. The software running on the host computer is designed based on VC++. The embedded controlling board uses Advanced Reduced Instruction Set Machines 7 (ARM7) as the micro controller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed based on the operating system uClinux. The bus interface board, data acquisition board and telemetry communication board are designed based on a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on BAAR. By analyzing the test results, an unqualified channel of the electronic receiving cabin was discovered. It is suggested that the test-bench system can be used to quickly determine the working condition of sub modules of BAAR and it is of great significance in improving production efficiency and accelerating industrial production of the logging tool.
Human performance interfaces in air traffic control.
Chang, Yu-Hern; Yeh, Chung-Hsing
2010-01-01
This paper examines how human performance factors in air traffic control (ATC) affect each other through their mutual interactions. The paper extends the conceptual SHEL model of ergonomics to describe the ATC system as human performance interfaces in which the air traffic controllers interact with other human performance factors including other controllers, software, hardware, environment, and organisation. New research hypotheses about the relationships between human performance interfaces of the system are developed and tested on data collected from air traffic controllers, using structural equation modelling. The research result suggests that organisational influences play a more significant role than individual differences or peer influences on how the controllers interact with the software, hardware, and environment of the ATC system. There are mutual influences between the controller-software, controller-hardware, controller-environment, and controller-organisation interfaces of the ATC system, with the exception of the controller-controller interface. Research findings of this study provide practical insights into managing human performance interfaces of the ATC system in the face of internal or external change, particularly in understanding its possible consequences in relation to the interactions between human performance factors.
A Review and Reappraisal of Adaptive Human-Computer Interfaces in Complex Control Systems
2006-08-01
...maneuverability measures. The cost elements were expressed as fuzzy membership functions. Figure 9 shows the flowchart of the route planner. A fuzzy navigator... and updating of the user model, which contains information about three generic stereotypes (beginner, intermediate, and expert users) plus an...
Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms.
Royer, Audrey S; He, Bin
2009-02-01
In a brain-computer interface (BCI) utilizing a process control strategy, the signal from the cortex is used to control the fine motor details normally handled by other parts of the brain. In a BCI utilizing a goal selection strategy, the signal from the cortex is used to determine the overall end goal of the user, and the BCI controls the fine motor details. A BCI based on goal selection may be an easier and more natural system than one based on process control. Although goal selection in theory may surpass process control, the two have never been directly compared, as we are reporting here. Eight young healthy human subjects participated in the present study, three trained and five naïve in BCI usage. Scalp-recorded electroencephalograms (EEG) were used to control a computer cursor during five different paradigms. The paradigms were similar in their underlying signal processing and used the same control signal. However, three were based on goal selection, and two on process control. For both the trained and naïve populations, goal selection had more hits per run, was faster, more accurate (for seven out of eight subjects) and had a higher information transfer rate than process control. Goal selection outperformed process control in every measure studied in the present investigation.
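The distinction between the two strategies is easiest to see in code. In the sketch below (gains, time step, and goal positions are illustrative assumptions), process control integrates a continuously decoded velocity, while goal selection decodes only a discrete choice and lets the machine generate the trajectory:

```python
import numpy as np

def process_control_step(cursor, decoded_velocity, dt=0.05):
    """Process control: the brain signal supplies the fine motor detail
    (a continuous velocity) and the cursor simply integrates it."""
    return cursor + dt * decoded_velocity

def goal_selection_step(cursor, decoded_goal_index, goals, gain=0.1):
    """Goal selection: the brain signal only picks the end goal; the
    BCI generates the fine movement toward it."""
    goal = goals[decoded_goal_index]
    return cursor + gain * (goal - cursor)

goals = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]

cursor = np.zeros(2)
for _ in range(20):
    cursor = goal_selection_step(cursor, 0, goals)   # user chose goal 0
print("goal selection:", cursor)                     # homes in on (0.8, 0.2)

cursor = np.zeros(2)
for _ in range(20):
    cursor = process_control_step(cursor, np.array([0.5, 0.1]))
print("process control:", cursor)                    # path shaped by the user
```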
Li, Guangye; Zhang, Dingguo
2016-01-01
An all-chain-wireless brain-to-brain system (BTBS), which enabled motion control of a cyborg cockroach via human brain, was developed in this work. Steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) was used in this system for recognizing human motion intention and an optimization algorithm was proposed in SSVEP to improve online performance of the BCI. The cyborg cockroach was developed by surgically integrating a portable microstimulator that could generate invasive electrical nerve stimulation. Through Bluetooth communication, specific electrical pulse trains could be triggered from the microstimulator by BCI commands and were sent through the antenna nerve to stimulate the brain of cockroach. Serial experiments were designed and conducted to test overall performance of the BTBS with six human subjects and three cockroaches. The experimental results showed that the online classification accuracy of three-mode BCI increased from 72.86% to 78.56% by 5.70% using the optimization algorithm and the mean response accuracy of the cyborgs using this system reached 89.5%. Moreover, the results also showed that the cyborg could be navigated by the human brain to complete walking along an S-shape track with the success rate of about 20%, suggesting the proposed BTBS established a feasible functional information transfer pathway from the human brain to the cockroach brain. PMID:26982717
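A standard way to recognize SSVEP intent, shown here for context, is canonical correlation analysis (CCA) against sinusoidal references at each stimulation frequency. This is a common baseline method, not the paper's specific optimization algorithm, and all values below are synthetic:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_classify(eeg, fs, stim_freqs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine reference set
    correlates best with the multichannel EEG segment."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(eeg, ref)
        u, v = cca.transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))]

# Toy 2-channel segment flickering at 10 Hz among three candidates:
fs = 250
t = np.arange(500) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t)] * 2)
eeg += 0.5 * np.random.default_rng(3).standard_normal(eeg.shape)
print(ssvep_classify(eeg, fs, [8.0, 10.0, 12.0]))  # -> 10.0
```

Each recognized frequency then maps to one of the three BCI modes that trigger the corresponding stimulation pulse train.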
Simulating Humans as Integral Parts of Spacecraft Missions
NASA Technical Reports Server (NTRS)
Bruins, Anthony C.; Rice, Robert; Nguyen, Lac; Nguyen, Heidi; Saito, Tim; Russell, Elaine
2006-01-01
The Collaborative-Virtual Environment Simulation Tool (C-VEST) software was developed for use in a NASA project entitled "3-D Interactive Digital Virtual Human." The project is oriented toward the use of a comprehensive suite of advanced software tools in computational simulations for the purposes of human-centered design of spacecraft missions and of the spacecraft, space suits, and other equipment to be used on the missions. The C-VEST software affords an unprecedented suite of capabilities for three-dimensional virtual-environment simulations with plug-in interfaces for physiological data, haptic interfaces, plug-and-play software, realtime control, and/or playback control. Mathematical models of the mechanics of the human body and of the aforementioned equipment are implemented in software and integrated to simulate forces exerted on and by astronauts as they work. The computational results can then support the iterative processes of design, building, and testing in applied systems engineering and integration. The results of the simulations provide guidance for devising measures to counteract effects of microgravity on the human body and for the rapid development of virtual (that is, simulated) prototypes of advanced space suits, cockpits, and robots to enhance the productivity, comfort, and safety of astronauts. The unique ability to implement human-in-the-loop immersion also makes the C-VEST software potentially valuable for use in commercial and academic settings beyond the original space-mission setting.
Dennerlein, J T; Yang, M C
2001-01-01
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
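An attractive force field of the kind used in the target condition can be modeled as a spring that engages only near the target. A minimal sketch (the field radius and stiffness are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def attractive_force(cursor, target, radius=0.05, stiffness=40.0):
    """Spring-like force pulling the pointer toward the target once the
    cursor enters the field radius; zero elsewhere."""
    offset = target - cursor
    distance = np.linalg.norm(offset)
    if distance > radius or distance == 0.0:
        return np.zeros(2)          # outside the field: no haptic feedback
    return stiffness * offset       # inside: pull proportional to offset

print(attractive_force(np.array([0.49, 0.50]), np.array([0.50, 0.50])))
```

Adding further fields around distractor targets, as the study did, simply means summing such terms for every on-screen object, which is what erodes the benefit in cluttered layouts.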
Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies
Grau, Carles; Ginhoux, Romuald; Riera, Alejandro; Nguyen, Thanh Lam; Chauvat, Hubert; Berg, Michel; Amengual, Julià L.; Pascual-Leone, Alvaro; Ruffini, Giulio
2014-01-01
Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues. PMID:25137064
FLOW SIMULATION IN THE HUMAN UPPER RESPIRATORY TRACT
Computer simulations of airflow patterns within the human upper respiratory tract (URT) are presented. The URT model includes airways of the head (nasal and oral), throat (pharyngeal and laryngeal), and lungs (trachea and main bronchi). The head and throat mor...
Biosensor Technologies for Augmented Brain-Computer Interfaces in the Next Decades
2012-05-13
...Augmented brain–computer interface (ABCI); biosensor; cognitive-state monitoring; electroencephalogram (EEG); human brain imaging... Manuscript received November 28, 2011; accepted December 20... magnetic resonance imaging (fMRI) [1], positron emission tomography (PET) [2], electroencephalograms (EEGs) and optical brain imaging techniques (i.e...
Kencana, Andy Prima; Heng, John
2008-11-01
This paper introduces a novel passive tongue control and tracking device. The device is intended to be used by severely disabled or quadriplegic persons. The main feature of this device, compared with other existing tongue tracking devices, is that the sensor employed is passive: it requires no powered electrical sensor to be inserted into the user's mouth, and hence no trailing wires. This haptic interface device employs inductive sensors to track the position of the user's tongue. The device is able to perform two main PC functions, those of the keyboard and the mouse. The results show that this device allows a severely disabled person to have some control over his environment, such as turning daily electrical devices or appliances on and off, or serving as a viable PC Human Computer Interface (HCI) through tongue control. The operating principle and set-up of such a novel passive tongue HCI have been established with successful laboratory trials and experiments. Further clinical trials will be required to test the device on disabled persons before it is ready for future commercial development.
Kim, Jongshin; Nam, Kyoung Won; Jang, Ik Gyu; Yang, Hee Kyung; Kim, Kwang Gi; Hwang, Jeong-Min
2012-03-15
To evaluate the accuracy, validity, and reliability of a newly developed infrared optical head tracker (IOHT) using Nintendo Wii remote controllers (WiiMote; Nintendo Co. Ltd., Kyoto, Japan) for measurement of the angle of head posture. The IOHT consists of two infrared (IR) receivers (WiiMote) that are fixed to a mechanical frame and connected to a monitoring computer via a Bluetooth communication channel and an IR beacon that consists of four IR light-emitting diodes (LEDs). With the use of the Cervical Range of Motion (CROM; Performance Attainment Associates, St. Paul, MN) as a reference, one- and three-dimensional (1- and 3-D) head postures of 20 normal adult subjects (20-37 years of age; 9 women and 11 men) were recorded with the IOHT. In comparison with the data from the CROM, the IOHT-derived results showed high consistency. The measurements of 1- and 3-D positions of the human head with the IOHT were very close to those of the CROM. The correlation coefficients of 1- and 3-D positions between the IOHT and the CROM were more than 0.99 and 0.96 (P < 0.05, Pearson's correlation test), respectively. Reliability tests of the IOHT for the normal adult subjects for 1- and 3-D positions of the human head had 95% limits of agreement angles of approximately ±4.5° and ±8.0°, respectively. The IOHT showed strong concordance with the CROM and relatively good test-retest reliability, thus proving its validity and reliability as a head-posture-measuring device. Considering its high performance, ease of use, and low cost, the IOHT has the potential to be widely used as a head-posture-measuring device in clinical practice.
Research and development of service robot platform based on artificial psychology
NASA Astrophysics Data System (ADS)
Zhang, Xueyuan; Wang, Zhiliang; Wang, Fenhua; Nagai, Masatake
2007-12-01
Some related works on the control architecture of robot systems are briefly summarized. Based on these discussions, this paper proposes a control architecture for service robots based on artificial psychology. In this control architecture, the robot perceives its environment through sensors, processes this input with an intelligent model, an affective model, and a learning model, and finally expresses its reaction to outside stimulation through its behavior. To better explain the architecture, its hierarchical structure is also discussed. The control system of the robot can be divided into five layers, namely the physical layer, the drives layer, the information-processing and behavior-programming layer, the application layer, and the system inspection and control layer. This paper shows how system integration is achieved across hardware modules, software interfaces, and fault diagnosis. The embedded system GENE-8310 is selected as the PC platform of the robot APROS-I, and its primary storage medium is a CF card. The arms and body of the robot are constituted by 13 motors and connecting fittings. In addition, the robot has a head with emotional facial expressions, and the head has 13 DOFs. The emotional and intelligent model is one of the most important parts of human-machine interaction. In order to better simulate human emotion, an emotional interaction model for the robot is proposed according to Maslow's theory of need levels and Simonov's theory of emotional information. This architecture has already been used in our intelligent service robot.
The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology
Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Fazli, Siamac; Sannelli, Claudia; Haufe, Stefan; Maeder, Cecilia; Ramsey, Lenny; Sturm, Irene; Curio, Gabriel; Müller, Klaus-Robert
2010-01-01
Brain–computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies. PMID:21165175
NASA Astrophysics Data System (ADS)
Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco
2017-05-01
Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic arm mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.
Design of Flight Control Panel Layout using Graphical User Interface in MATLAB
NASA Astrophysics Data System (ADS)
Wirawan, A.; Indriyanto, T.
2018-04-01
This paper introduces the design of a Flight Control Panel (FCP) layout using the Graphical User Interface in MATLAB. The FCP is the interface used to send commands to the simulation and to monitor model variables while the simulation is running. The commands accommodated by the FCP are the altitude command, the angle-of-sideslip command, the heading command, and the setting command for the turbulence model. The FCP was also designed to monitor flight parameters while the simulation is running.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose: To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design: Expert perspective. Methods: An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results: Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions: Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J
2009-03-01
Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment which is able to provide user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
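Online adaptation of this kind can be illustrated with a recursive, Kalman-style update of linear classifier weights. The sketch below is a simplified linear stand-in for the paper's extended Kalman filter; it uses the paradigm's known intention labels as supervision, and all parameters are illustrative:

```python
import numpy as np

class OnlineLinearClassifier:
    """Recursive least-squares update of linear classifier weights,
    a simplified linear analogue of extended-Kalman-filter adaptation."""

    def __init__(self, n_features, forgetting=0.99):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) * 10.0   # weight-estimate covariance
        self.lam = forgetting                # discounts old, non-stationary data

    def update(self, x, label):
        """x: EEG feature vector; label: +1/-1 supplied by the paradigm,
        which reveals the user's true control intention and timing."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # gain
        self.w += k * (label - self.w @ x)   # innovation-driven correction
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, x):
        return 1 if self.w @ x > 0 else -1
```

Because the simulated-robot environment exposes the intended command at every moment, each sample can feed `update()` immediately, which is precisely what makes self-paced online adaptation tractable in this paradigm.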
NASA Astrophysics Data System (ADS)
Krusienski, D. J.; Shih, J. J.
2011-04-01
A brain-computer interface (BCI) is a device that enables severely disabled people to communicate and interact with their environments using their brain waves. Most research investigating BCI in humans has used scalp-recorded electroencephalography or intracranial electrocorticography. The use of brain signals obtained directly from stereotactic depth electrodes to control a BCI has not previously been explored. In this study, event-related potentials (ERPs) recorded from bilateral stereotactic depth electrodes implanted in and adjacent to the hippocampus were used to control a P300 Speller paradigm. The ERPs were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in the two subjects tested. Our results demonstrate that ERPs from hippocampal and hippocampal adjacent depth electrodes can be used to reliably control the P300 Speller BCI paradigm.
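For context, P300 Speller prediction usually reduces to scoring each row/column flash with a linear classifier and averaging the evidence across stimulation sequences. A hedged sketch on synthetic epochs (the feature layout and LDA classifier are assumptions, not the study's exact preprocessing):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Epoch features are assumed to be decimated post-stimulus ERP samples.
rng = np.random.default_rng(5)
n_epochs, n_features = 600, 48
X = rng.standard_normal((n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)      # 1 = epoch contained the target flash
X[y == 1] += 0.6                      # toy P300 effect

clf = LinearDiscriminantAnalysis().fit(X, y)

def choose(scores_by_stimulus):
    """scores_by_stimulus: {row/column id: array of decision values}."""
    return max(scores_by_stimulus,
               key=lambda s: np.mean(scores_by_stimulus[s]))

new_epochs = {}
for r in range(6):
    E = rng.standard_normal((15, n_features))   # 15 stimulation sequences
    if r == 3:
        E += 0.6                                # row 3 holds the target
    new_epochs[r] = clf.decision_function(E)
print("predicted row:", choose(new_epochs))     # -> 3
```

Averaging over fewer sequences trades accuracy for speed, which is why the reported near-100% accuracy with under 15 sequences is notable.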
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model-linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations with performance (with 8–23% task discrimination errors) comparable to the discrimination performance of the commonly used features such as frequency specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657
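The AR parameterization at the center of this design is straightforward to compute. A minimal sketch (least-squares fit with an illustrative model order, not the authors' estimation code):

```python
import numpy as np

def ar_coefficients(x, order=6):
    """Least-squares fit of an autoregressive model
    x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]; the coefficient vector
    serves as a per-channel feature (and candidate MPC model parameters)."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                         for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

rng = np.random.default_rng(9)
eeg_channel = rng.standard_normal(1000)
print(ar_coefficients(eeg_channel))   # 6 features for this channel
```

The same coefficient vector can thus serve double duty: as the plant model inside a predictive controller and as the feature vector for motor-imagery task discrimination.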
NASA Astrophysics Data System (ADS)
Aditya, K.; Biswadeep, G.; Kedar, S.; Sundar, S.
2017-11-01
Human-computer communication has seen growing demand in recent years. The new generation of autonomous technology aspires to give computer interfaces emotional states that take both the user and the system environment into consideration. The existing computational model is based on artificial intelligence, augmented externally by multi-modal expression with semi-human characteristics. The main problem with this multi-modal expression is that the hardware control given to the Artificial Intelligence (AI) is very limited. In this project, we therefore try to give the AI more control over the hardware. Two main parts, a Speech-to-Text (STT) engine and a Text-to-Speech (TTS) engine, are used to accomplish this requirement. In this work, we use a Raspberry Pi 3, a speaker, and a microphone as the hardware; for the programming part, we use Python scripting.
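A minimal voice loop of the kind described can be sketched with two widely used Python packages. The package choice (speech_recognition, pyttsx3) and the Google STT backend are assumptions for illustration, not the authors' stated stack:

```python
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
engine = pyttsx3.init()          # offline TTS, suitable for a Raspberry Pi

def listen_once():
    """Capture one utterance from the microphone and return its text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)   # cloud STT engine

def speak(text):
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    command = listen_once()
    speak(f"You said: {command}")
    # 'command' could then be parsed to drive GPIO-connected hardware,
    # which is the extra hardware control the project argues for.
```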
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance by all viewers on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
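The pan/zoom behavior amounts to selecting a crop window inside the wide fish-eye capture from the tracked head pose. A minimal sketch of such a mapping (the normalization, gains, and window sizes are illustrative assumptions; the prototype's distortion correction is not reproduced):

```python
import numpy as np

def crop_window(image, yaw, pitch, zoom, out=(480, 640)):
    """Pan/zoom a wide captured frame from head orientation: yaw/pitch
    (normalized to -1..1) move the crop centre, zoom (>1) narrows it."""
    h, w = image.shape[:2]
    ch, cw = int(out[0] / zoom), int(out[1] / zoom)
    cy = int((h - ch) * (0.5 + 0.5 * pitch))   # up-down head motion
    cx = int((w - cw) * (0.5 + 0.5 * yaw))     # left-right head motion
    return image[cy:cy + ch, cx:cx + cw]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in video frame
view = crop_window(frame, yaw=0.3, pitch=-0.1, zoom=2.0)
print(view.shape)   # (240, 320, 3): the region sent to the HMD
```

Because only the crop moves, the underlying camera stays fixed, which is what removes the anticue problem described above.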
Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.
Rutkowski, Tomasz M; Mori, Hiromu
2015-04-15
The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking-syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, novel multiple head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). In order to further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more effective than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm together with data-driven preprocessing methods is a step forward in robust BCI applications research.
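The classification stage can be sketched as follows. Plain band-pass filtering stands in for the synchrosqueezing-transform preprocessing described above (SST is not reproduced here), and all data, band edges, and the epoch layout are synthetic assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

def preprocess(epochs, fs=512.0, band=(0.5, 12.0)):
    """Band-pass each epoch; a stand-in for the paper's SST filtering."""
    b, a = butter(3, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, epochs, axis=-1)

rng = np.random.default_rng(11)
epochs = rng.standard_normal((200, 256))     # trials x post-stimulus samples
labels = rng.integers(0, 2, 200)             # 1 = attended head position
epochs[labels == 1, 100:150] += 0.5          # toy P300-like deflection

X = preprocess(epochs)
clf = LogisticRegression(max_iter=1000).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```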
Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).
Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2016-08-01
Tumor Treating Fields (TTFields) are low-intensity (1-3 V/cm) alternating electric fields in the intermediate frequency range (100-300 kHz). TTFields are an anti-mitotic treatment against solid tumors that is approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, a software package called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than from distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.
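The simplification step itself, replacing a segmented tissue surface by the convex hull of its points, is a one-call operation. A minimal sketch with synthetic coordinates (the point cloud and its scale are assumptions, not patient data):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Stand-in for a segmented scalp surface: a noisy 3D point cloud with
# roughly head-sized extents in metres.
rng = np.random.default_rng(13)
scalp_points = rng.standard_normal((2000, 3)) * [0.09, 0.11, 0.08]

hull = ConvexHull(scalp_points)
print("hull vertices:", len(hull.vertices))
print("enclosed volume (m^3):", hull.volume)
# hull.simplices gives triangles usable as a closed surface mesh; nesting
# one hull per tissue layer yields the simplified model for the
# finite-element electric-field computation.
```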
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to be able to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcasted through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
Kennedy Space Center's Command and Control System - "Toasters to Rocket Ships"
NASA Technical Reports Server (NTRS)
Lougheed, Kirk; Mako, Cheryle
2011-01-01
This slide presentation reviews the history of the development of the command and control system at Kennedy Space Center: from a system that could be brought to Florida in the trunk of a car in the 1950s; through the larger and more complex launch vehicles of the Apollo program, when human launch controllers managed the launch process with a hardware-only system that required a dedicated human interface to perform every function until the Apollo vehicle lifted off from the pad; through the digital computers that interfaced with ground launch processing systems during the Space Shuttle program; and finally to the future control room being developed to control missions to return to the Moon and to reach Mars, which will maximize the use of Commercial-Off-The-Shelf (COTS) hardware and software that is standards-based and not tied to a single vendor. The system is designed to be flexible and adaptable to support the requirements of future spacecraft and launch vehicles.
Human/Computer Interfacing in Educational Environments.
ERIC Educational Resources Information Center
Sarti, Luigi
1992-01-01
This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…
Software for Simulating a Complex Robot
NASA Technical Reports Server (NTRS)
Goza, S. Michael
2003-01-01
RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of the Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Essentially, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematic computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematics algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.
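RoboSim's own inverse-kinematics algorithms are not spelled out in this abstract; as a generic illustration of IK for a chain with fewer than six degrees of freedom, the sketch below runs a damped-least-squares iteration for a planar two-joint "finger" with assumed link lengths.

```python
import numpy as np

L1, L2 = 0.04, 0.03                       # link lengths (m), assumed

def fk(q):
    """Forward kinematics of the planar 2-link chain."""
    x = L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1])
    y = L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik(target, q=np.zeros(2), damping=0.05, iters=100):
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        # Damped pseudo-inverse keeps the step stable near singularities.
        q = q + np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ e)
    return q

print(ik(np.array([0.05, 0.02])))         # joint angles reaching the target
```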
Brain-Computer Interfaces Using Sensorimotor Rhythms: Current State and Future Perspectives
Yuan, Han; He, Bin
2014-01-01
Many studies over the past two decades have shown that people can use brain signals to convey their intent to a computer using brain-computer interfaces (BCIs). BCI systems extract specific features of brain activity and translate them into control signals that drive an output. Recently, a category of BCIs that are built on the rhythmic activity recorded over the sensorimotor cortex, i.e. the sensorimotor rhythm (SMR), has attracted considerable attention among the BCIs that use noninvasive neural recordings, e.g. electroencephalography (EEG), and have demonstrated the capability of multi-dimensional prosthesis control. This article reviews the current state and future perspectives of SMR-based BCI and its clinical applications, in particular focusing on the EEG SMR. The characteristic features of SMR from the human brain are described and their underlying neural sources are discussed. The functional components of SMR-based BCI, together with its current clinical applications are reviewed. Lastly, limitations of SMR-BCIs and future outlooks are also discussed. PMID:24759276
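A minimal version of the SMR feature underlying such BCIs is band power in the mu rhythm; the sketch below estimates it with Welch's method. The sampling rate, band edges, and data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate (Hz), assumed
eeg = np.random.randn(fs * 4)              # 4 s of one channel (placeholder)
f, pxx = welch(eeg, fs=fs, nperseg=fs)     # power spectral density
mu = (f >= 8) & (f <= 13)                  # mu rhythm band, assumed edges
mu_power = np.trapz(pxx[mu], f[mu])        # control feature: mu band power
print(mu_power)
```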
Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng
2013-08-01
Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
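The paper's feature extraction is more elaborate, but the core idea of mapping two EOG channels to eight directions can be sketched with simple thresholds, as below; the threshold value and channel conventions are assumptions.

```python
def eog_direction(h, v, thresh=50e-6):     # 50 uV threshold, assumed
    """Label an eye movement from horizontal (h) and vertical (v) EOG deflections."""
    dx = 1 if h > thresh else (-1 if h < -thresh else 0)
    dy = 1 if v > thresh else (-1 if v < -thresh else 0)
    names = {(0, 1): "up", (0, -1): "down", (-1, 0): "left", (1, 0): "right",
             (-1, 1): "up-left", (-1, -1): "down-left",
             (1, 1): "up-right", (1, -1): "down-right", (0, 0): "none"}
    return names[(dx, dy)]

print(eog_direction(80e-6, -70e-6))        # -> "down-right"
```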
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
Controller/Computer Interface with an Air-Ground Data Link
DOT National Transportation Integrated Search
1976-06-01
This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...
A flexible telerobotic system for space operations
NASA Technical Reports Server (NTRS)
Sliwa, N. O.; Will, R. W.
1987-01-01
The objective and design of a proposed goal-oriented, knowledge-based telerobotic system for space operations are described. This design effort encompasses the elements of the system executive and user interface, the distribution and general structure of the knowledge base, the displays, and the task sequencing. The objective of the design effort is to provide an expandable structure for a telerobotic system that supports cooperative interaction between the human operator and computer control. The initial phase of the implementation provides a rule-based, goal-oriented script generator interfacing with the existing control modes of a telerobotic research system in the Intelligent Systems Research Lab at NASA Research Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winters, J.M.
Some background is given on the field of human factors. The nature of problems with current human/computer interfaces is discussed, some costs are identified, ideal attributes of graceful system interfaces are outlined, and some reasons are indicated why it's not easy to fix the problems. (LEW)
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost-node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (child models) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflect conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
Mehl, Steffen W.; Hill, Mary C.
2006-01-01
This report documents the addition of shared-node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflect conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
NASA Astrophysics Data System (ADS)
Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten
2012-08-01
A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
Aircraft Alerting Systems Standardization Study. Phase IV. Accident Implications on Systems Design.
1982-06-01
…computing and processing to assimilate and process status information using… provided with capabilities in computing and processing, sensing, interfacing, and controlling and displaying. Computing and Processing: algorithms… An alerting system performing a flight status monitor function would require additional sensing, computing and processing, interfacing, and controlling.
3-D PARTICLE TRANSPORT WITHIN THE HUMAN UPPER RESPIRATORY TRACT
In this study trajectories of inhaled particulate matter (PM) were simulated within a three-dimensional (3-D) computer model of the human upper respiratory tract (URT). The airways were described by computer-reconstructed images of a silicone rubber cast of the human head, throat...
NASA Astrophysics Data System (ADS)
Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.
2009-10-01
A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
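For reference, the Fitts' law quantities used in this kind of evaluation reduce to a few lines; the sketch below computes the index of difficulty and an ITR-style throughput for an assumed target distance, width, and movement time.

```python
import math

def index_of_difficulty(D, W):
    """Fitts' original formulation; the Shannon variant uses log2(D/W + 1)."""
    return math.log2(2 * D / W)

def transfer_rate(D, W, movement_time):
    """Information transfer rate in bits per second."""
    return index_of_difficulty(D, W) / movement_time

# Assumed task: 20 cm reach to a 4 cm target acquired in 1.5 s.
print(transfer_rate(D=0.20, W=0.04, movement_time=1.5))   # ~2.2 bits/s
```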
The MITy micro-rover: Sensing, control, and operation
NASA Technical Reports Server (NTRS)
Malafeew, Eric; Kaliardos, William
1994-01-01
The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.
Development of the User Interface for AIR-Spec
NASA Astrophysics Data System (ADS)
Cervantes Alcala, E.; Guth, G.; Fedeler, S.; Samra, J.; Cheimets, P.; DeLuca, E.; Golub, L.
2016-12-01
The airborne infrared spectrometer (AIR-Spec) is an imaging spectrometer that will observe the solar corona during the 2017 total solar eclipse. This eclipse will provide a unique opportunity to observe infrared emission lines in the corona. Five spectral lines are of particular interest because they may eventually be used to measure the coronal magnetic field. To avoid infrared absorption from atmospheric water vapor, AIR-Spec will be placed on an NSF Gulfstream aircraft flying above 14.9 km. AIR-Spec must be capable of taking stable images while the plane moves. The instrument includes an image stabilization system, which uses fiber-optic gyroscopes to determine platform rotation, GPS to calculate the ephemeris of the sun, and a voltage-driven mirror to correct the line of sight. An operator monitors a white light image of the eclipse and manually corrects for residual drift. The image stabilization calculation is performed by a programmable automatic controller (PAC), which interfaces with the gyroscopes and mirror controller. The operator interfaces with a separate computer, which acquires images and computes the solar ephemeris. To ensure image stabilization is successful, a human machine interface (HMI) was developed to allow connection between the client and PAC. In order to make control of the instruments user friendly during the short eclipse observation, a graphical user interface (GUI) was also created. The GUI's functionality includes turning image stabilization on and off, allowing the user to input information about the geometric setup, calculating the solar ephemeris, refining estimates of the initial aircraft attitude, and storing data from the PAC on the operator's computer. It also displays time, location, attitude, ephemeris, gyro rates and mirror angles.
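One plausible way to implement the GUI's solar-ephemeris calculation is with astropy, as sketched below; the time and aircraft position are assumptions, and the AIR-Spec flight code itself is not described in this abstract.

```python
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time
import astropy.units as u

t = Time("2017-08-21 17:45:00")                        # around mid-eclipse, UTC
loc = EarthLocation(lat=39.0*u.deg, lon=-95.0*u.deg,   # assumed aircraft fix
                    height=14900*u.m)                  # flight altitude
# Solar ephemeris: apparent altitude and azimuth at the aircraft's position.
sun = get_sun(t).transform_to(AltAz(obstime=t, location=loc))
print(sun.alt.deg, sun.az.deg)
```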
User Language Considerations in Military Human-Computer Interface Design
1988-06-30
Personal authors: … Pond & William K. C… This report details the soldier language issues of possible relevance to US military effectiveness, especially in those systems with critical… Contents: Implications of Bilingualism; Stress Effects; Significance for the US Military; Bilingualism and the Human-Computer Interface; Computer-specific…
Turning Shortcomings into Challenges: Brain-Computer Interfaces for Games
NASA Astrophysics Data System (ADS)
Nijholt, Anton; Reuderink, Boris; Oude Bos, Danny
In recent years we have seen a rising interest in brain-computer interfacing for human-computer interaction and potential game applications. Until now, however, we have almost only seen attempts where BCI is used to measure the affective state of the user or in neurofeedback games. There have hardly been any attempts to design BCI games where BCI is considered to be one of the possible input modalities that can be used to control the game. One reason may be that research still follows the paradigms of the traditional, medically oriented, BCI approaches. In this paper we discuss current BCI research from the viewpoint of games and game design. It is hoped that this survey will make clear that we need to design different games than we used to, but that such games can nevertheless be interesting and exciting.
Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design
NASA Technical Reports Server (NTRS)
Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Architectures and Interfaces: The implications of real-time interaction on software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. Brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget to both the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
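As a toy illustration of time-critical budgeting, the sketch below greedily selects visualization techniques by benefit-to-cost ratio until a per-frame budget is spent; all technique names and numbers are invented, not taken from the talk.

```python
# Each technique: (estimated benefit, cost in ms per frame); values invented.
techniques = {"streamlines": (9.0, 12.0), "isosurface": (7.0, 18.0),
              "cutplane": (4.0, 3.0), "particles": (6.0, 8.0)}
budget_ms = 25.0

chosen, used = [], 0.0
# Greedy fill by benefit/cost ratio, a simple stand-in for budget assignment.
for name, (benefit, cost) in sorted(techniques.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1],
                                    reverse=True):
    if used + cost <= budget_ms:
        chosen.append(name)
        used += cost
print(chosen, used)   # techniques rendered this frame and time consumed
```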
Role and interest of new technologies in data processing for space control centers
NASA Astrophysics Data System (ADS)
Denier, Jean-Paul; Caspar, Raoul; Borillo, Mario; Soubie, Jean-Luc
1990-10-01
The ways in which a multidisciplinary approach can improve space control centers are discussed. Electronic documentation, ergonomics of human-computer interfaces, natural language, intelligent tutoring systems and artificial intelligence systems are considered and applied in the study of the Hermes flight control center. It is concluded that such technologies are best integrated into a classical operational environment rather than taking a revolutionary approach which would involve a global modification of the system.
Cursor control by Kalman filter with a non-invasive body–machine interface
Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A
2015-01-01
Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper-body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
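A bare-bones version of the decoding step might look like the sketch below: a constant-velocity Kalman filter whose observation matrix stands in for the calibration map from shoulder signals to cursor kinematics. All matrices, noise levels, and rates are assumptions, not the published calibration.

```python
import numpy as np

dt = 0.02                                     # 50 Hz update rate, assumed
A = np.block([[np.eye(2), dt*np.eye(2)],      # state: [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # calibration map (placeholder)
Q, R = 1e-4*np.eye(4), 1e-2*np.eye(2)         # process / measurement noise
x, P = np.zeros(4), np.eye(4)

def kalman_step(z):
    """One predict/update cycle; z is the calibrated 2D shoulder reading."""
    global x, P
    x, P = A @ x, A @ P @ A.T + Q             # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                   # update with IMU observation
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                              # estimated cursor position

print(kalman_step(np.array([0.1, -0.05])))
```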
A Robust Camera-Based Interface for Mobile Entertainment
Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier
2016-01-01
Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different factors to configure, such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288
Yin, Ming; Li, Hao; Bull, Christopher; Borton, David A; Aceros, Juan; Larson, Lawrence; Nurmikko, Arto V
2013-01-01
In this paper we present a new type of head-mounted wireless neural recording device in a highly compact package, dedicated to untethered laboratory animal research and designed for future mobile human clinical use. The device, which takes its input from an array of intracortical microelectrode arrays (MEAs), has ninety-seven broadband parallel neural recording channels and was integrated onto two custom-designed printed circuit boards. These house several low-power custom integrated circuits, including a preamplifier ASIC and a controller ASIC, plus two SAR ADCs, a 3-axis accelerometer, a 48 MHz clock source, and a Manchester encoder. Another ultralow-power RF chip supports an OOK transmitter with a center frequency tunable from 3 GHz to 4 GHz, mounted on a separate low-loss dielectric board together with a 3 V LDO, with the output fed to a UWB chip antenna. The IC boards were interconnected and packaged in a polyether ether ketone (PEEK) enclosure which is compatible with both animal and human use (e.g., sterilizable). The entire system consumes 17 mA from a 1.2 Ah, 3.6 V Li-SOCl2 1/2AA battery, which operates the device for more than 2 days. The overall system includes custom RF receiver electronics designed to interface directly with any number of commercial (or custom) neural signal processors for multi-channel broadband neural recording. Bench-top measurements and in vivo testing of the device in rhesus macaques are presented to demonstrate the performance of the wireless neural interface.
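The quoted battery life follows from simple arithmetic, as the snippet below checks: a 1.2 Ah cell at a 17 mA draw lasts roughly 70 hours, consistent with "more than 2 days".

```python
capacity_ah, current_a = 1.2, 0.017        # figures quoted in the abstract
hours = capacity_ah / current_a
print(hours, hours / 24)                   # ~70.6 h, i.e. just under 3 days
```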
ERIC Educational Resources Information Center
Li, Xiaoming; Atkins, Melissa S.; Stanton, Bonita
2006-01-01
Data from 122 Head Start children were analyzed to examine the impact of computer use on school readiness and psychomotor skills. Children in the experimental group were given the opportunity to work on a computer for 15-20 minutes per day with their choice of developmentally appropriate educational software, while the control group received a…
Eye-movements and Voice as Interface Modalities to Computer Systems
NASA Astrophysics Data System (ADS)
Farid, Mohsen M.; Murtagh, Fionn D.
2003-03-01
We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.
On the Rhetorical Contract in Human-Computer Interaction.
ERIC Educational Resources Information Center
Wenger, Michael J.
1991-01-01
An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, B.; /Fermilab
1999-10-08
A user interface is created to monitor and operate the heating, ventilation, and air conditioning (HVAC) system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational status. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the HVAC system. The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper will address methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system will cover the separate water and air subsystems and their purposes. In addition, programming and system automation will also be covered.
Where to look? Automating attending behaviors of virtual human characters
NASA Technical Reports Server (NTRS)
Chopra Khullar, S.; Badler, N. I.
2001-01-01
This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
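The final translation from decoded brain state to robot impedance can be as simple as a linear map; the sketch below scales an assumed probability of an "active" state into a Cartesian stiffness command. The limits and the linear form are illustrative assumptions, not the paper's values.

```python
def stiffness_from_probability(p_active, k_min=100.0, k_max=1000.0):
    """Map a decoded probability in [0, 1] to a stiffness command in N/m."""
    return k_min + p_active * (k_max - k_min)

print(stiffness_from_probability(0.7))   # -> 730.0 N/m
```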
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Development of a robotic device for facilitating learning by children who have severe disabilities.
Cook, Albert M; Meng, Max Q H; Gu, Jason J; Howery, Kathy
2002-09-01
This paper presents technical aspects of a robot manipulator developed to facilitate learning by young children who are generally unable to grasp objects or speak. The severity of these physical disabilities also limits assessment of their cognitive and language skills and abilities. The CRS robot manipulator was adapted for use by children who have disabilities. Our emphasis is on the technical control aspects of the development of an interface and communication environment between the child and the robot arm. The system is designed so that each child has user control and control procedures that are individually adapted. Control interfaces include large push buttons, keyboards, laser pointer, and head-controlled switches. Preliminary results have shown that young children who have severe disabilities can use the robotic arm system to complete functional play-related tasks. Developed software allows the child to accomplish a series of multistep tasks by activating one or more single switches. Through a single switch press the child can replay a series of preprogrammed movements that have a development sequence. Children using this system engaged in three-step sequential activities and were highly responsive to the robotic tasks. This was in marked contrast to other interventions using toys and computer games.
Ghajari, Mazdak; Hellyer, Peter J; Sharp, David J
2017-01-01
Abstract Traumatic brain injury can lead to the neurodegenerative disease chronic traumatic encephalopathy. This condition has a clear neuropathological definition but the relationship between the initial head impact and the pattern of progressive brain pathology is poorly understood. We test the hypothesis that mechanical strain and strain rate are greatest in sulci, where neuropathology is prominently seen in chronic traumatic encephalopathy, and whether human neuroimaging observations converge with computational predictions. Three distinct types of injury were simulated. Chronic traumatic encephalopathy can occur after sporting injuries, so we studied a helmet-to-helmet impact in an American football game. In addition, we investigated an occipital head impact due to a fall from ground level and a helmeted head impact in a road traffic accident involving a motorcycle and a car. A high fidelity 3D computational model of brain injury biomechanics was developed and the contours of strain and strain rate at the grey matter–white matter boundary were mapped. Diffusion tensor imaging abnormalities in a cohort of 97 traumatic brain injury patients were also mapped at the grey matter–white matter boundary. Fifty-one healthy subjects served as controls. The computational models predicted large strain most prominent at the depths of sulci. The volume fraction of sulcal regions exceeding brain injury thresholds were significantly larger than that of gyral regions. Strain and strain rates were highest for the road traffic accident and sporting injury. Strain was greater in the sulci for all injury types, but strain rate was greater only in the road traffic and sporting injuries. Diffusion tensor imaging showed converging imaging abnormalities within sulcal regions with a significant decrease in fractional anisotropy in the patient group compared to controls within the sulci. Our results show that brain tissue deformation induced by head impact loading is greatest in sulcal locations, where pathology in cases of chronic traumatic encephalopathy is observed. In addition, the nature of initial head loading can have a significant influence on the magnitude and pattern of injury. Clarifying this relationship is key to understanding the long-term effects of head impacts and improving protective strategies, such as helmet design. PMID:28043957
Human recognition based on head-shoulder contour extraction and BP neural network
NASA Astrophysics Data System (ADS)
Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian
2014-11-01
In practical application scenarios such as video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Based on the fact that the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with stable characteristics can serve as a detection feature describing the human body in detection and recognition tasks. To extract the head-shoulder contour accurately, this paper proposes a head-shoulder model establishment method combining edge detection with mean-shift image clustering. First, an adaptive mixture-of-Gaussians background update method is used to extract targets from the video sequence. Second, edge detection is used to extract the contour of moving objects, and the mean-shift algorithm clusters parts of the target's contour. Third, the head-shoulder model is established according to the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between head-shoulder contour eigenvectors and moving objects is learned by training a back-propagation (BP) neural network classifier, and the head-shoulder model can be used for human detection and recognition. Experiments have shown that the proposed method combining edge detection with the mean-shift algorithm can extract the complete head-shoulder contour, with low computational complexity and high efficiency.
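The contour-extraction step combining edge detection with mean-shift clustering can be sketched as below, using OpenCV's Canny detector and scikit-learn's MeanShift on a synthetic head-and-shoulders silhouette; the paper's own parameters and descriptors are not reproduced.

```python
import cv2
import numpy as np
from sklearn.cluster import MeanShift

# Synthetic foreground mask standing in for a background-subtracted frame.
frame = np.zeros((120, 160), dtype=np.uint8)
cv2.circle(frame, (80, 40), 20, 255, -1)             # "head"
cv2.rectangle(frame, (50, 60), (110, 110), 255, -1)  # "shoulders"

edges = cv2.Canny(frame, 50, 150)                    # contour of the target
pts = np.argwhere(edges > 0).astype(float)           # (row, col) edge points
clusters = MeanShift(bandwidth=15).fit(pts)          # group contour segments
print(len(np.unique(clusters.labels_)))              # number of contour parts
```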
Computer interface for mechanical arm
NASA Technical Reports Server (NTRS)
Derocher, W. L.; Zermuehlen, R. O.
1978-01-01
Man/machine interface commands computer-controlled mechanical arm. The remotely controlled arm has six degrees of freedom and is controlled through a "supervisory-control" mode, in which all motions of the arm follow a set of preprogrammed sequences. For simplicity, few prescribed commands are required to accomplish an entire operation. Applications include operating the computer-controlled arm to handle radioactive or explosive materials, or commanding the arm to perform functions in hostile environments. A modified version using displays may be applied in medicine.
Man-machine interface issues in space telerobotics: A JPL research and development program
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1987-01-01
Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presented on control and information problems in space teleoperation and on the characteristics of Earth-orbital teleoperation. The JPL R&D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R&D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between the human operator and the telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.
Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Gait Rehabilitation Systems
Castermans, Thierry; Duvinage, Matthieu; Cheron, Guy; Dutoit, Thierry
2014-01-01
In the last few years, significant progress has been made in the field of walk rehabilitation. Motor cortex signals in bipedal monkeys have been interpreted to predict walk kinematics. Epidural electrical stimulation in rats and in one young paraplegic has been realized to partially restore motor control after spinal cord injury. However, these experimental trials are far from being applicable to all patients suffering from motor impairments. Therefore, it is thought that more simple rehabilitation systems are desirable in the meanwhile. The goal of this review is to describe and summarize the progress made in the development of non-invasive brain-computer interfaces dedicated to motor rehabilitation systems. In the first part, the main principles of human locomotion control are presented. The paper then focuses on the mechanisms of supra-spinal centers active during gait, including results from electroencephalography, functional brain imaging technologies [near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI), positron-emission tomography (PET), single-photon emission-computed tomography (SPECT)] and invasive studies. The first brain-computer interface (BCI) applications to gait rehabilitation are then presented, with a discussion about the different strategies developed in the field. The challenges to raise for future systems are identified and discussed. Finally, we present some proposals to address these challenges, in order to contribute to the improvement of BCI for gait rehabilitation. PMID:24961699
Heidrich, Regina O; Jensen, Emely; Rebelo, Francisco; Oliveira, Tiago
2015-01-01
This article presents a comparative study between people with cerebral palsy and healthy controls, of various ages, using a Brain-computer Interface (BCI) device. The research is qualitative in its approach; the researchers worked with observational case studies. People with cerebral palsy and healthy controls were evaluated in Portugal and in Brazil. The study aimed to evaluate the product in order to determine whether people with cerebral palsy could interact with the computer, and to compare whether their performance is similar to that of healthy controls when using the Brain-computer Interface. Ultimately, it was found that there are no significant differences between people with cerebral palsy in the two countries, nor between the populations without cerebral palsy (healthy controls).
Decoding human mental states by whole-head EEG+fNIRS during category fluency task performance
NASA Astrophysics Data System (ADS)
Omurtag, Ahmet; Aghajani, Haleh; Onur Keles, Hasan
2017-12-01
Objective. Concurrent scalp electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), which we refer to as EEG+fNIRS, promises greater accuracy than the individual modalities while remaining nearly as convenient as EEG. We sought to quantify the hybrid system’s ability to decode mental states and compare it with its unimodal components. Approach. We recorded from healthy volunteers taking the category fluency test and applied machine learning techniques to the data. Main results. EEG+fNIRS’s decoding accuracy was greater than that of its subsystems, partly due to the new type of neurovascular features made available by hybrid data. Significance. Availability of an accurate and practical decoding method has potential implications for medical diagnosis, brain-computer interface design, and neuroergonomics.
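A common way to realize such hybrid decoding is simple feature-level fusion; the sketch below concatenates assumed EEG and fNIRS feature vectors and trains one classifier. This is only one of several fusion strategies, and the dimensions and labels are placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC

eeg_feats = np.random.randn(60, 32)      # e.g. band powers per channel (assumed)
fnirs_feats = np.random.randn(60, 16)    # e.g. HbO/HbR means per channel (assumed)
labels = np.random.randint(0, 2, 60)     # task vs. rest (placeholder)

X = np.hstack([eeg_feats, fnirs_feats])  # EEG+fNIRS feature fusion
clf = SVC().fit(X, labels)               # one classifier over both modalities
```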
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
Human Computer Interface Design Criteria. Volume 1. User Interface Requirements
2010-03-19
Television tuners, including tuner cards for use in computers, shall be equipped with secondary audio program playback circuitry. (c) All training… Acronyms: COTS (Commercial Off-The-Shelf), CSS (Cascading Style Sheets), DII (Defense Information Infrastructure), DISA (Defense Information Systems Agency), DoD (Department of Defense).
Pita, Murillo S; do Nascimento, Cássio; Dos Santos, Carla G P; Pires, Isabela M; Pedrazzi, Vinícius
2017-07-01
The aim of this in vitro study was to identify and quantify up to 38 microbial species from human saliva penetrating through the implant-abutment interface in two different implant connections, external hexagon and tri-channel internal connection, both with conventional flat-head or experimental conical-head abutment screws. Forty-eight two-part implants with external hexagon (EH; n = 24) or tri-channel internal (TI; n = 24) connections were investigated. Abutments were attached to implants with conventional flat-head or experimental conical-head screws. After saliva incubation, checkerboard DNA-DNA hybridization was used to identify and quantify up to 38 bacterial species colonizing the internal parts of the implants. The Kruskal-Wallis test followed by Bonferroni's post-tests for multiple comparisons was used for statistical analysis. Twenty-four of thirty-eight species, including putative periodontal pathogens, were found colonizing the inner surfaces of both EH and TI implants. Peptostreptococcus anaerobius (P = 0.003), Prevotella melaninogenica (P < 0.0001), and Candida dubliniensis (P < 0.0001) presented significant differences between groups. Mean total microbial counts (×10⁴, ±SD) for each group were as follows: G1 (0.27 ± 2.04), G2 (0 ± 0), G3 (1.81 ± 7.50), and G4 (0.35 ± 1.81). Differences in the geometry of implant connections and abutment screws affected microbial leakage through the implant-abutment interface. Implants attached with experimental conical-head abutment screws showed lower counts of microorganisms compared with conventional flat-head screws. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2000-01-01
The FAA's Free Flight Phase 1 Office is in the process of deploying the current generation of CTAS tools, the Traffic Management Advisor (TMA) and the passive Final Approach Spacing Tool (pFAST), at selected centers and airports. Research at NASA is now focused on extending the CTAS software and computer-human interfaces to provide more advanced capabilities. The Multi-center TMA (McTMA) is designed to operate at airports where arrival flows originate from two or more centers whose boundaries are in close proximity to the TRACON boundary. McTMA will also include techniques for routing arrival flows away from congested airspace and around airspace reserved for arrivals into other hub airports. NASA is working with the FAA and MITRE to build a prototype McTMA for the Philadelphia airport. The active Final Approach Spacing Tool (aFAST) provides speed and heading advisories to help controllers achieve accurate spacing between aircraft on final approach. These advisories will be integrated with those in the existing pFAST to provide a comprehensive set of advisories for controlling arrival traffic from the TRACON boundary to touchdown at complex, high-capacity airports. A research prototype of aFAST, designed for the Dallas-Fort Worth airport, is in an advanced stage of development. The Expedite Departure Path (EDP) and Direct-To tools are designed to help controllers guide departing aircraft out of the TRACON airspace and climb to cruise altitude along the most efficient routes.
An independent brain-computer interface using covert non-spatial visual selective attention
NASA Astrophysics Data System (ADS)
Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai
2010-02-01
In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
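The frequency-tagging principle behind this BCI can be sketched in a few lines: compare spectral power at the two surface flicker frequencies and pick the attended one. The frequencies, sampling rate, and data below are placeholders, not the study's stimulation parameters.

```python
import numpy as np

fs, f1, f2 = 250, 8.0, 13.0               # Hz; assumed flicker frequencies
eeg = np.random.randn(fs * 3)             # 3 s parieto-occipital channel (placeholder)

spec = np.abs(np.fft.rfft(eeg))**2        # power spectrum
freqs = np.fft.rfftfreq(eeg.size, 1/fs)
p1 = spec[np.argmin(np.abs(freqs - f1))]  # SSVEP power at surface 1's frequency
p2 = spec[np.argmin(np.abs(freqs - f2))]  # SSVEP power at surface 2's frequency
print("surface 1" if p1 > p2 else "surface 2")
```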
Intuitive wireless control of a robotic arm for people living with an upper body disability.
Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B
2015-08-01
Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal, but also the most suitable human-machine interface (HMI) that should be provided to users. This paper describes the design of a highly intuitive wireless controller for people living with upper-body disabilities who retain residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller, and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
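The complementary-filter approach mentioned above can be illustrated for a single pitch angle, as in the sketch below: fast gyro integration blended with slow accelerometer tilt. The blend factor, rates, and axis conventions are assumptions rather than the published design.

```python
import numpy as np

def complementary_pitch(pitch, gyro_y, accel, dt, alpha=0.98):
    """Blend integrated gyro rate with accelerometer-derived tilt."""
    accel_pitch = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    return alpha * (pitch + gyro_y * dt) + (1 - alpha) * accel_pitch

pitch = 0.0
for _ in range(100):                      # 100 samples at an assumed 100 Hz
    pitch = complementary_pitch(pitch, gyro_y=0.01,
                                accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
print(np.degrees(pitch))                  # estimated head pitch (degrees)
```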
A Computational Study of the Rheology and Structure of Surfactant Covered Droplets
NASA Astrophysics Data System (ADS)
Maia, Joao; Boromand, Arman
Surface-active agents are ubiquitous in industrial applications ranging from the cosmetic and food industries to polymeric nanocomposites and blends. They make it possible to produce stable multiphasic systems such as foams and emulsions, whose stability and shelf-life are directly determined by the efficiency and type of the surfactant molecules. Moreover, the presence and self-assembly of these species on an interface give rise to complex dynamics and structural evolution under different processing conditions. Analogous to the bulk rheology of complex systems, surfactant-covered interfaces respond to external mechanical forces or deformations differently depending on the molecular configuration and topology of the system's constituents. Although the effect of the molecular configuration of surface-active molecules on planar interfaces has been studied both experimentally and computationally, it remains challenging from both experimental and computational standpoints to track the efficiency and effectiveness of surfactant molecules with different molecular geometries on curved interfaces. Using Dissipative Particle Dynamics, we have studied the effectiveness and efficiency of different surfactant molecules on a curved interface, both in equilibrium and far from equilibrium. Interfacial tension is calculated for linear and branched surfactants with different hydrophobic and hydrophilic tail and head groups and different branching densities. Deformation parameter and Taylor plots are obtained for individual surfactant molecules under shear flow.
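For planar interfaces, the mechanical route to interfacial tension integrates the difference between the normal and tangential pressure-tensor components across the interface; the sketch below does this numerically on placeholder profiles. The droplet case in the abstract requires a curved-interface generalization not shown here.

```python
import numpy as np

z = np.linspace(-5.0, 5.0, 200)            # coordinate across the interface (DPD units)
p_n = np.ones_like(z)                      # normal pressure component (placeholder)
p_t = 1.0 - 0.8*np.exp(-z**2)              # tangential component dips at the interface

# gamma = integral of (P_N - P_T) dz across the interface.
gamma = np.trapz(p_n - p_t, z)
print(gamma)                               # interfacial tension estimate
```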
N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo
2017-08-01
Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface that was fully embedded in the oral cavity and provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer interface is described. The interface encompassed a 10-key keypad area and a mouse pad area, and the system was embedded wirelessly into the oral cavity of the user. The functionality of the system was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: the system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad. The maximal typing rate was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s for repetitively typing a correct character with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for Rehabilitation: new design, implementation, and detection methods for intraoral assistive devices; demonstration of wireless powering and encapsulation techniques suitable for intraoral embedment of assistive devices; demonstration of the functionality of a rechargeable and fully embedded intraoral tongue-controlled computer input device.
Neurofeedback Training for BCI Control
NASA Astrophysics Data System (ADS)
Neuper, Christa; Pfurtscheller, Gert
Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials), the electrocorticogram (ECoG), or the electroencephalogram (EEG). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].
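As an illustration of the classification step described above, the sketch below computes a log band-power feature from single-channel EEG epochs and feeds it to a linear discriminant classifier. The band limits, sampling rate, and classifier choice are illustrative assumptions, not a specific published pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power(epoch, fs, lo=8.0, hi=12.0):
    """Log mean power of one EEG epoch (1-D array) in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return np.log(psd[(freqs >= lo) & (freqs <= hi)].mean())

# epochs: (n_trials, n_samples) array; labels: the intended command per trial
# X = np.array([[band_power(ep, fs=256.0)] for ep in epochs])
# clf = LinearDiscriminantAnalysis().fit(X, labels)
# command = clf.predict([[band_power(new_epoch, fs=256.0)]])
```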
Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study.
Laffont, Isabelle; Biard, Nicolas; Chalubert, Gérard; Delahoche, Laurent; Marhic, Bruno; Boyer, François C; Leroux, Christophe
2009-10-01
Laffont I, Biard N, Chalubert G, Delahoche L, Marhic B, Boyer FC, Leroux C. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study. Grasping robots are still difficult to use for persons with disabilities because of inadequate human-machine interfaces (HMIs). Our purpose was to evaluate the efficacy of a graphic interface enhanced by a panoramic camera to detect out-of-view objects and control a commercialized robotic grasping arm. Multicenter, open-label trial. Four French departments of physical and rehabilitation medicine. Control subjects (N=24; mean age, 33y) and 20 severely impaired patients (mean age, 44y; 5 with muscular dystrophies, 13 with traumatic tetraplegia, and 2 others) completed the study. None of these patients was able to grasp a 50-cL bottle without the robot. Participants were asked to grasp 6 objects scattered around their wheelchair using the robotic arm. They were able to select the desired object through the graphic interface available on their computer screen. Outcome measures were the global success rate, the time needed to select the object on the computer screen, the number of clicks on the HMI, and satisfaction among users. We found a significantly lower success rate in patients (81.1% vs 88.7%; chi-square test, P=.017). The duration of the task was significantly higher in patients (71.6s vs 39.1s; P<.001). We set a cut-off for the maximum duration at 79 seconds, representing twice the time needed by the control subjects to complete the task. Under these conditions, the success rate for the impaired participants was 65% versus 85.4% for control subjects. The mean number of clicks needed to select the object with the HMI was very close in both groups: patients used (mean +/- SD) 7.99+/-6.07 clicks, whereas controls used 7.04+/-2.87 clicks. Considering the severity of the patients' impairment, all these differences were considered small. Furthermore, a high satisfaction rate was reported by this population concerning the use of the graphic interface. The graphic interface is of interest for controlling robotic arms for disabled people, with numerous potential applications in daily life.
Wilson, Gwendoline Ixia; Holton, Mark D.; Walker, James; Jones, Mark W.; Grundy, Ed; Davies, Ian M.; Clarke, David; Luckman, Adrian; Russill, Nick; Wilson, Vianney; Plummer, Rosie
2015-01-01
Understanding the way humans inform themselves about their environment is pivotal in helping explain our susceptibility to stimuli and how this modulates behaviour and movement patterns. We present a new device, the Human Interfaced Personal Observation Platform (HIPOP), which is a head-mounted (typically on a hat) unit that logs magnetometry and accelerometry data at high rates and, following appropriate calibration, can be used to determine the heading and pitch of the wearer’s head. We used this device on participants visiting a botanical garden and noted that although head pitch ranged between −80° and 60°, 25% confidence limits were restricted to an arc of about 25° with a tendency for the head to be pitched down (mean head pitch ranged between −43° and 0°). Mean rates of change of head pitch varied between −0.00187°/0.1 s and 0.00187°/0.1 s, markedly slower than rates of change of head heading which varied between −0.3141°/0.1 s and 0.01263°/0.1 s although frequency distributions of both parameters showed them to be symmetrical and monomodal. Overall, there was considerable variation in both head pitch and head heading, which highlighted the role that head orientation might play in exposing people to certain features of the environment. Thus, when used in tandem with accurate position-determining systems, the HIPOP can be used to determine how the head is orientated relative to gravity and geographic North and in relation to geographic position, presenting data on how the environment is being ‘framed’ by people in relation to environmental content. PMID:26157643
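The heading and pitch computation the HIPOP performs after calibration can be sketched from calibrated accelerometer and magnetometer triplets. The version below uses one common axis and sign convention and is illustrative only, not the device's published calibration procedure.

```python
import numpy as np

def head_pitch_and_heading(acc, mag):
    """Pitch (degrees) and tilt-compensated magnetic heading (degrees,
    0-360) from 3-axis accelerometer and magnetometer readings."""
    ax, ay, az = acc / np.linalg.norm(acc)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, az)
    mx, my, mz = mag / np.linalg.norm(mag)
    # rotate the magnetic vector back into the horizontal plane
    mxh = mx * np.cos(pitch) + mz * np.sin(pitch)
    myh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
           - mz * np.sin(roll) * np.cos(pitch))
    heading = np.degrees(np.arctan2(-myh, mxh)) % 360.0
    return np.degrees(pitch), heading
```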
Active Microelectronic Neurosensor Arrays for Implantable Brain Communication Interfaces
Song, Y.-K.; Borton, D. A.; Park, S.; Patterson, W. R.; Bull, C. W.; Laiwalla, F.; Mislow, J.; Simeral, J. D.; Donoghue, J. P.; Nurmikko, A. V.
2010-01-01
We have built a wireless implantable microelectronic device for transmitting cortical signals transcutaneously. The device is aimed at interfacing a cortical microelectrode array to an external computer for neural control applications. Our implantable microsystem presently enables 16-channel broadband neural recording in a non-human primate brain by converting these signals to a digital stream of infrared light pulses for transmission through the skin. The implantable unit employs a flexible polymer substrate onto which we have integrated ultra-low-power amplification with analog multiplexing, an analog-to-digital converter, a low-power digital controller chip, and infrared telemetry. The scalable 16-channel microsystem can employ any of several power-supply modalities, including radio-frequency induction and infrared light via a photovoltaic converter. To date, the implant has been tested as a sub-chronic unit in non-human primates (~1 month), yielding robust spike and broadband neural data on all available channels. PMID:19502132
Design for interaction between humans and intelligent systems during real-time fault management
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.
1992-01-01
Initial results are reported that provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real-time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of the activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.
Technology transfer of operator-in-the-loop simulation
NASA Technical Reports Server (NTRS)
Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.
1994-01-01
The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands are digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion-sensing technology on the Nintendo Wii [1]. Video-based human-computer interaction (HCI) techniques, in particular, have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the achievable accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the 3D motion-capture data matrix does not contain pixel values but is closer to the human level of semantics.
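The matrix layout described above makes sub-body-part extraction a simple column selection. The toy sketch below assumes a hypothetical joint-to-column mapping; actual VICON exports name their channels differently.

```python
import numpy as np

# Rows are time frames; columns are grouped per joint (x, y, z per joint).
mocap = np.random.rand(1200, 60)            # 1200 frames, 20 joints x 3 axes
JOINT_COLS = {"head": slice(0, 3),          # hypothetical mapping
              "left_wrist": slice(12, 15)}

left_wrist_motion = mocap[:, JOINT_COLS["left_wrist"]]   # (1200, 3) trajectory
```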
The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders
Maples-Keller, Jessica L.; Bunnell, Brian E.; Kim, Sae-Jin; Rothbaum, Barbara O.
2016-01-01
Virtual reality, or VR, allows users to experience a sense of presence in a computer-generated three-dimensional environment. Sensory information is delivered through a head mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR allows for controlled delivery of sensory stimulation via the therapist and is a convenient and cost-effective treatment. The primary focus of this article is to review the available literature regarding the effectiveness of incorporating VR within the psychiatric treatment of a wide range of psychiatric disorders, with a specific focus on exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR based treatment for anxiety or other psychiatric disorders. This review will provide an overview of the history of the development of VR based technology and its use within psychiatric treatment, an overview of the empirical evidence for VR based treatment, the benefits for using VR for psychiatric research and treatment, recommendations for how to incorporate VR into psychiatric care, and future directions for VR based treatment and clinical research. PMID:28475502
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Brain-Computer Interface with Inhibitory Neurons Reveals Subtype-Specific Strategies.
Mitani, Akinori; Dong, Mingyuan; Komiyama, Takaki
2018-01-08
Brain-computer interfaces have seen an increase in popularity due to their potential for direct neuroprosthetic applications for amputees and disabled individuals. Supporting this promise, animals, including humans, can learn even arbitrary mappings between the activity of cortical neurons and the movement of prosthetic devices [1-4]. However, the performance of neuroprosthetic device control has been nowhere near that of limb control in healthy individuals, presenting a dire need for improvement. One potential limitation is the fact that previous work has not distinguished the diverse cell types in the neocortex, even though different cell types possess distinct functions in cortical computations [5-7] and likely distinct capacities to control brain-computer interfaces. Here, we made a first step in addressing this issue by tracking the plastic changes of three major types of cortical inhibitory neurons (INs) during a neuron-pair operant conditioning task, using two-photon imaging of IN subtypes expressing GCaMP6f. Mice were rewarded when the activity of the positive target neuron (N+) exceeded that of the negative target neuron (N-) beyond a set threshold. Mice improved performance with all subtypes, but the strategies were subtype specific. When parvalbumin (PV)-expressing INs were targeted, the activity of N- decreased. However, targeting of somatostatin (SOM)- and vasoactive intestinal peptide (VIP)-expressing INs led to an increase in N+ activity. These results demonstrate that INs can be individually modulated in a subtype-specific manner and highlight the versatility of neural circuits in adapting to new demands by using cell-type-specific strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Designers' models of the human-computer interface
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Breedin, Sarah D.
1993-01-01
Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather, they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
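The hierarchical cluster analysis step can be sketched as follows: the mean similarity judgments form a square matrix, which is converted to dissimilarities and clustered. SciPy's linkage routine stands in here for whatever software the authors used; the cluster count and linkage method are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_concepts(similarity, n_clusters=4):
    """Group interface concepts (windows, menus, editing, help, ...)
    from a symmetric (n, n) mean-similarity matrix with values in [0, 1]."""
    dist = 1.0 - similarity                  # similarity -> dissimilarity
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```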
Winston, R.B.
1999-01-01
This report describes enhancements to a Graphical User Interface (GUI) for MODFLOW-96, the U.S. Geological Survey (USGS) modular, three-dimensional, finite-difference ground-water flow model, and MOC3D, the USGS three-dimensional, method-of-characteristics solute-transport model. The GUI is a plug-in extension (PIE) for the commercial program Argus ONE. The GUI has been modified to support MODPATH (a particle-tracking post-processing package for MODFLOW), ZONEBDGT (a computer program for calculating subregional water budgets), and the Stream, Horizontal-Flow Barrier, and Flow and Head Boundary packages in MODFLOW. Context-sensitive help has been added to make the GUI easier to use and to understand. In large part, the help consists of quotations from the relevant sections of this report and its predecessors. The revised interface includes automatic creation of the geospatial information layers required for the added programs and packages, and menus and dialog boxes for input of simulation-control parameters. The GUI creates formatted ASCII files that can be read by MODFLOW-96, MOC3D, MODPATH, and ZONEBDGT. All four programs can be executed within the Argus ONE application (Argus Interware, Inc., 1997). Spatial results of MODFLOW-96, MOC3D, and MODPATH can be visualized within Argus ONE. Results from ZONEBDGT can be visualized in an independent program that can also be used to view budget data from MODFLOW, MOC3D, and SUTRA. Another independent program extracts hydrographs of head or drawdown at individual cells from formatted MODFLOW head and drawdown files. A web-based tutorial on the use of MODFLOW with Argus ONE has also been updated. The internal structure of the GUI has been modified to make it possible for advanced users to easily customize the GUI. Two additional, independent PIEs were developed to allow users to edit the positions of nodes and to facilitate exporting the grid geometry to external programs.
Human agency beliefs influence behaviour during virtual social interactions.
Caruana, Nathan; Spirou, Dean; Brock, Jon
2017-01-01
In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals.
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Experimental Injury Biomechanics of the Pediatric Head and Brain
NASA Astrophysics Data System (ADS)
Margulies, Susan; Coats, Brittany
Traumatic brain injury (TBI) is a leading cause of death and disability among children and young adults in the United States and results in over 2,500 childhood deaths, 37,000 hospitalizations, and 435,000 emergency department visits each year (Langlois et al. 2004). Computational models of the head have proven to be powerful tools to help us understand mechanisms of adult TBI and to determine load thresholds for injuries specific to adult TBI. Similar models need to be developed for children and young adults to identify age-specific mechanisms and injury tolerances appropriate for children and young adults. The reliability of these tools, however, depends heavily on the availability of pediatric tissue material property data. To date the majority of material and structural properties used in pediatric computer models have been scaled from adult human data. Studies have shown significant age-related differences in brain and skull properties (Prange and Margulies 2002; Coats and Margulies 2006a, b), indicating that the pediatric head cannot be modeled as a miniature adult head, and pediatric computer models incorporating age-specific data are necessary to accurately mimic the pediatric head response to impact or rotation. This chapter details the developmental changes of the pediatric head and summarizes human pediatric properties currently available in the literature. Because there is a paucity of human pediatric data, material properties derived from animal tissue are also presented to demonstrate possible age-related differences in the heterogeneity and rate dependence of tissue properties. The chapter is divided into three main sections: (1) brain, meninges, and cerebral spinal fluid (CSF); (2) skull; and (3) scalp.
A novel method for intraoral access to the superior head of the human lateral pterygoid muscle.
Oliveira, Aleli Tôrres; Camilo, Anderson Aparecido; Bahia, Paulo Roberto Valle; Carvalho, Antonio Carlos Pires; DosSantos, Marcos Fabio; da Silva, Jorge Vicente Lopes; Monteiro, André Antonio
2014-01-01
The uncoordinated activity of the superior and inferior parts of the lateral pterygoid muscle (LPM) has been suggested to be one of the causes of temporomandibular joint (TMJ) disc displacement. A therapy for this muscle disorder is the injection of botulinum toxin (BTX) into the LPM. However, there is a potential risk of side effects with the injection guide methods currently available. In addition, they do not permit appropriate differentiation between the two bellies of the muscle. Herein, a novel method is presented to provide intraoral access to the superior head of the human LPM with maximal control and minimal hazards. Computed tomography, along with digital imaging software programs and rapid prototyping techniques, was used to create a rapid-prototyped guide to orient BTX injections in the superior LPM. The method proved to be feasible and reliable. Furthermore, when tested in one volunteer, it allowed precise access to the upper head of the LPM without producing side effects. The prototyped guide presented in this paper is a novel tool that provides intraoral access to the superior head of the LPM. Further studies will be necessary to test the efficacy and validate this method in a larger cohort of subjects.
Save medical personnel's time by improved user interfaces.
Kindler, H
1997-01-01
Common objectives in the industrialized countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case-mix systems for reimbursement by social-security institutions. More data are required to enable quality improvement and increases in clinical effectiveness, and for juridical reasons. At first glance, this documentation effort contradicts cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort of documentation should be decreased by providing a co-operative working environment for healthcare professionals that applies sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend towards client/server systems with relational or object-oriented databases as the repository. The work flows used for controlling purposes and for steering the user interfaces must be represented in the repository.
Sensor Control And Film Annotation For Long Range, Standoff Reconnaissance
NASA Astrophysics Data System (ADS)
Schmidt, Thomas G.; Peters, Owen L.; Post, Lawrence H.
1984-12-01
This paper describes a Reconnaissance Data Annotation System that incorporates off-the-shelf technology and system designs providing a high degree of adaptability and interoperability to satisfy future reconnaissance data requirements. The history of data annotation for reconnaissance is reviewed in order to provide the base from which future developments can be assessed and technical risks minimized. The system described will accommodate new developments in recording head assemblies and the incorporation of advanced cameras of both the film and electro-optical type. Use of microprocessor control and a digital bus interface forms the central design philosophy. For long-range, high-altitude, standoff missions, the Data Annotation System computes the projected latitude and longitude of the central target position from aircraft position and attitude. This complements the use of longer ranges and higher altitudes for reconnaissance missions.
Brain-computer interfaces in the continuum of consciousness.
Kübler, Andrea; Kotchoubey, Boris
2007-12-01
To summarize recent developments and look at important future aspects of brain-computer interfaces. Recent brain-computer interface studies are largely targeted at helping severely or even completely paralysed patients. The former are only able to communicate yes or no via a single muscle twitch, and the latter are totally nonresponsive. Such patients can control brain-computer interfaces and use them to select letters, words or items on a computer screen, for neuroprosthesis control or for surfing the Internet. This condition of motor paralysis, in which cognition and consciousness appear to be unaffected, is traditionally opposed to nonresponsiveness due to disorders of consciousness. Although these groups of patients may appear to be very alike, numerous transition states between them are demonstrated by recent studies. All nonresponsive patients can be regarded on a continuum of consciousness which may vary even within short time periods. As overt behaviour is lacking, cognitive functions in such patients can only be investigated using neurophysiological methods. We suggest that brain-computer interfaces may provide a new tool to investigate cognition in disorders of consciousness, and propose a hierarchical procedure entailing passive stimulation, active instructions, volitional paradigms, and brain-computer interface operation.
Visual design for the user interface, Part 1: Design fundamentals.
Lynch, P J
1994-01-01
Digital audiovisual media and computer-based documents will be the dominant forms of professional communication in both clinical medicine and the biomedical sciences. The design of highly interactive multimedia systems will shortly become a major activity for biocommunications professionals. The problems of human-computer interface design are intimately linked with graphic design for multimedia presentations and on-line document systems. This article outlines the history of graphic interface design and the theories that have influenced the development of today's major graphic user interfaces.
Bashford, Luke; Mehring, Carsten
2016-01-01
To study body ownership and control, illusions that elicit these feelings in non-body objects are widely used. Classically introduced with the Rubber Hand Illusion, these illusions have more recently been replicated in virtual reality and with brain-computer interfaces. Traditionally, these illusions investigate the replacement of a body part by an artificial counterpart; however, as brain-computer interface research develops, it offers us the possibility of exploring the case where non-body objects are controlled in addition to movements of our own limbs. We therefore propose a new illusion designed to test the feeling of ownership and control of an independent supernumerary hand. Subjects are under the impression that they control a virtual reality hand via a brain-computer interface, but in reality there is no causal connection between brain activity and virtual hand movement; instead, correct movements are observed with 80% probability. These imitation brain-computer interface trials are interspersed with movements of both of the subjects' real hands, which are in view throughout the experiment. We show that subjects develop strong feelings of ownership and control over the third hand, despite only receiving visual feedback with no causal link to the actual brain signals. Our illusion is crucially different from previously reported studies, as we demonstrate independent ownership and control of the third hand without loss of ownership in the real hands.
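The 80% contingency at the heart of the illusion is easy to state in code. The sketch below is only an illustration of the trial logic; the direction labels are hypothetical.

```python
import random

def imitation_bci_trial(intended, directions=("left", "right", "up", "down"),
                        p_correct=0.8):
    """One sham-BCI trial: the virtual hand moves as intended with
    probability p_correct, with no link to any actual brain signal."""
    if random.random() < p_correct:
        return intended
    return random.choice([d for d in directions if d != intended])
```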
Modeling User Behavior in Computer Learning Tasks.
ERIC Educational Resources Information Center
Mantei, Marilyn M.
Model building techniques from Artificial Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
ERIC Educational Resources Information Center
Kirby, Paul J.; And Others
The design, development, test, and evaluation of an electronic hardware device interfacing a commercially available slide projector with a plasma panel computer terminal is reported. The interface device allows an instructional computer program to select slides for viewing based upon the lesson student situation parameters of the instructional…
A brain computer interface using electrocorticographic signals in humans
NASA Astrophysics Data System (ADS)
Leuthardt, Eric C.; Schalk, Gerwin; Wolpaw, Jonathan R.; Ojemann, Jeffrey G.; Moran, Daniel W.
2004-06-01
Brain-computer interfaces (BCIs) enable users to control devices with electroencephalographic (EEG) activity from the scalp or with single-neuron activity from within the brain. Both methods have disadvantages: EEG has limited resolution and requires extensive training, while single-neuron recording entails significant clinical risks and has limited stability. We demonstrate here for the first time that electrocorticographic (ECoG) activity recorded from the surface of the brain can enable users to control a one-dimensional computer cursor rapidly and accurately. We first identified ECoG signals that were associated with different types of motor and speech imagery. Over brief training periods of 3-24 min, four patients then used these signals to master closed-loop control and to achieve success rates of 74-100% in a one-dimensional binary task. In additional open-loop experiments, we found that ECoG signals at frequencies up to 180 Hz encoded substantial information about the direction of two-dimensional joystick movements. Our results suggest that an ECoG-based BCI could provide for people with severe motor disabilities a non-muscular communication and control option that is more powerful than EEG-based BCIs and is potentially more stable and less traumatic than BCIs that use electrodes penetrating the brain. The authors declare that they have no competing financial interests.
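One-dimensional cursor control of the kind described here is commonly implemented as a linear mapping from a normalized spectral feature to cursor velocity. The sketch below shows that generic idea; it is not the authors' decoder, and the gain and normalization constants are illustrative.

```python
def cursor_velocity(feature, baseline_mean, baseline_std, gain=1.0):
    """Map a band-power feature (e.g., from motor or speech imagery) to a
    signed 1-D cursor velocity by z-scoring against a rest baseline."""
    z = (feature - baseline_mean) / baseline_std
    return gain * z

# each screen update: cursor_y += cursor_velocity(bp, mu, sd) * dt
```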
Hands in space: gesture interaction with augmented-reality interfaces.
Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai
2014-01-01
Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.
Otolith and Vertical Canal Contributions to Dynamic Postural Control
NASA Technical Reports Server (NTRS)
Black, F. Owen
1999-01-01
The objective of this project is to determine: 1) how do normal subjects adjust postural movements in response to changing or altered otolith input, for example, due to aging? and 2) how do patients adapt postural control after altered unilateral or bilateral vestibular sensory inputs such as ablative inner ear surgery or ototoxicity, respectively? The following hypotheses are under investigation: 1) selective alteration of otolith input or abnormalities of otolith receptor function will result in distinctive spatial, frequency, and temporal patterns of head movements and body postural sway dynamics. 2) subjects with reduced, altered, or absent vertical semicircular canal receptor sensitivity but normal otolith receptor function or vice versa, should show predictable alterations of body and head movement strategies essential for the control of postural sway and movement. The effect of altered postural movement control upon compensation and/or adaptation will be determined. These experiments provide data for the development of computational models of postural control in normals, vestibular deficient subjects and normal humans exposed to unusual force environments, including orbital space flight.
Destabilization of Human Balance Control by Static and Dynamic Head Tilts
NASA Technical Reports Server (NTRS)
Paloski, William H.; Wood, Scott J.; Feiveson, Alan H.; Black, F. Owen; Hwang, Emma Y.; Reschke, Millard F.
2004-01-01
To better understand the effects of varying head movement frequencies on human balance control, 12 healthy adult humans were studied during static and dynamic (0.14, 0.33, 0.6 Hz) head tilts of +/-30 deg in the pitch and roll planes. Postural sway was measured during upright stance with eyes closed and altered somatosensory inputs provided by a computerized dynamic posturography (CDP) system. Subjects were able to maintain upright stance with static head tilts, although postural sway was increased during neck extension. Postural stability was decreased during dynamic head tilts, and the degree of destabilization varied directly with increasing frequency of head tilt. In the absence of vision and accurate foot support surface inputs, postural stability may be compromised during dynamic head tilts due to a decreased ability of the vestibular system to discern the orientation of gravity.
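A common scalar summary of the postural sway measured in such studies is the root-mean-square excursion of the center of pressure. The sketch below shows that summary for one axis; it is illustrative, not the CDP system's scoring.

```python
import numpy as np

def sway_rms(cop_trace):
    """RMS sway of a center-of-pressure trace (one axis, any units),
    computed about its own mean."""
    cop = np.asarray(cop_trace, dtype=float)
    return float(np.sqrt(np.mean((cop - cop.mean()) ** 2)))
```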
Mercer, James W.; Larson, S.P.; Faust, Charles R.
1980-01-01
Model documentation is presented for a two-dimensional (areal) model capable of simulating ground-water flow of salt water and fresh water separated by an interface. The partial differential equations are integrated over the thicknesses of fresh water and salt water resulting in two equations describing the flow characteristics in the areal domain. These equations are approximated using finite-difference techniques and the resulting algebraic equations are solved for the dependent variables, fresh water head and salt water head. An iterative solution method was found to be most appropriate. The program is designed to simulate time-dependent problems such as those associated with the development of coastal aquifers, and can treat water-table conditions or confined conditions with steady-state leakage of fresh water. The program will generally be most applicable to the analysis of regional aquifer problems in which the zone between salt water and fresh water can be considered a surface (sharp interface). Example problems and a listing of the computer code are included. (USGS).
Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J
2012-01-01
Unmanned systems (UAVs, UCAVs, and UGVs) still pose major human factors and ergonomic challenges related to the effective design of their control interface systems, which are crucial to their efficient operation, maintenance, and safety. Unmanned system interfaces designed with a human-centered approach promote intuitive interfaces that are easier to learn and reduce human errors and other cognitive ergonomic issues with interface design. Automation has shifted workload from physical to cognitive; thus, control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity level of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Five categories of controllers were defined based on the complexity of the buttons, control pads, joysticks, and switches on the controller. This allows the selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex design.
NASA Astrophysics Data System (ADS)
Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting
2016-03-01
Low-level light therapy (LLLT) has been applied clinically. Recently, more and more cases have been reported with positive therapeutic effects from transcranial light-emitting diode (LED) illumination. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of the subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software portion provides control of the on/off state of each LED array, the setup of illumination parameters, and a 3D view of the LLLT light dose distribution in the human subject according to the illumination setup. This LLLT light dose distribution was computed by a Monte Carlo model for voxelized media using the Visible Chinese Human head dataset and displayed in 3D against the anatomical structure of the head. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter-adjustable and illumination-distribution-visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.
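The Monte Carlo light-dose computation mentioned above can be illustrated with a deliberately simplified one-dimensional photon random walk with isotropic scattering; the actual model uses voxelized 3D anatomy. The optical coefficients below are illustrative placeholders, not tissue-validated values.

```python
import numpy as np

def mc_depth_dose(n_photons=20_000, mu_a=0.02, mu_s=10.0,
                  depth=50.0, n_bins=50, seed=0):
    """Histogram of absorbed photon weight vs. depth (mm) for a beam
    entering at z = 0; mu_a and mu_s are per-mm coefficients."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    dose = np.zeros(n_bins)
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0
        while w > 1e-4:
            z += cos_t * (-np.log(rng.random()) / mu_t)   # sampled free path
            if not 0.0 <= z < depth:
                break                                      # photon escapes
            absorbed = w * mu_a / mu_t                     # partial absorption
            dose[int(z / depth * n_bins)] += absorbed
            w -= absorbed
            cos_t = 2.0 * rng.random() - 1.0               # isotropic scatter
    return dose
```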
NASA Astrophysics Data System (ADS)
Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae
2012-04-01
The mobile computing device has many limitations, such as a relatively small user interface and slow computing speed. Face pose estimation, often required for augmented reality, can be used as an HCI and entertainment tool. As far as the real-time implementation of head pose estimation on relatively resource-limited mobile platforms is concerned, various constraints must be addressed while retaining sufficient face pose estimation accuracy. The proposed face pose estimation method met this objective. Experimental results running on a testing Android mobile device delivered satisfactory performance in real time and with good accuracy.
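A standard way to recover head pose from a single camera, in the same spirit as the approach described here, is to fit a small set of 2D facial landmarks to a rigid 3D face model with OpenCV's solvePnP. The model points and camera intrinsics below are rough illustrative values; the paper's actual Android implementation may differ.

```python
import cv2
import numpy as np

# Approximate 3D face model points (mm): nose tip, chin, eye corners,
# mouth corners. Values are illustrative, not an anthropometric standard.
MODEL_3D = np.array([(0, 0, 0), (0, -63, -12), (-43, 32, -26), (43, 32, -26),
                     (-28, -28, -24), (28, -28, -24)], dtype=np.float64)

def head_pose(landmarks_2d, frame_hw):
    """Rotation and translation of the head from six detected 2-D
    landmarks (float64 array of shape (6, 2)) in a frame of size (h, w)."""
    h, w = frame_hw
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d, cam, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec   # rvec is a Rodrigues rotation vector
```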
Kasashima-Shindo, Yuko; Fujiwara, Toshiyuki; Ushiba, Junichi; Matsushika, Yayoi; Kamatani, Daiki; Oto, Misa; Ono, Takashi; Nishimoto, Atsuko; Shindo, Keiichiro; Kawakami, Michiyuki; Tsuji, Tetsuya; Liu, Meigen
2015-04-01
Brain-computer interface technology has been applied to stroke patients to improve their motor function. Event-related desynchronization during motor imagery, which is used as a brain-computer interface trigger, is sometimes difficult to detect in stroke patients. Anodal transcranial direct current stimulation (tDCS) is known to increase event-related desynchronization. This study investigated the adjunctive effect of anodal tDCS for brain-computer interface training in patients with severe hemiparesis. Eighteen patients with chronic stroke. A non-randomized controlled study. Subjects were divided between a brain-computer interface group and a tDCS-brain-computer interface group and participated in 10 days of brain-computer interface training. Event-related desynchronization was detected in the affected hemisphere during motor imagery of the affected fingers. The tDCS-brain-computer interface group received anodal tDCS before brain-computer interface training. Event-related desynchronization was evaluated before and after the intervention. The Fugl-Meyer Assessment upper extremity motor score (FM-U) was assessed before, immediately after, and 3 months after the intervention. Event-related desynchronization was significantly increased in the tDCS-brain-computer interface group. The FM-U was significantly increased in both groups. The FM-U improvement was maintained at 3 months in the tDCS-brain-computer interface group. Anodal tDCS can be a conditioning tool for brain-computer interface training in patients with severe hemiparetic stroke.
Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia
1996-01-01
The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.
Videodisc-Computer Interfaces.
ERIC Educational Resources Information Center
Zollman, Dean
1984-01-01
Lists microcomputer-videodisc interfaces currently available from 26 sources, including home use systems connected through remote control jack and industrial/educational systems utilizing computer ports and new laser reflective and stylus technology. Information provided includes computer and videodisc type, language, authoring system, educational…
Human factors in air traffic control: problems at the interfaces.
Shouksmith, George
2003-10-01
The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers were randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface centered training.
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates, in a hybrid fashion, elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.
The control of float zone interfaces by the use of selected boundary conditions
NASA Technical Reports Server (NTRS)
Foster, L. M.; Mcintosh, J.
1983-01-01
The main goal of the float zone crystal growth project of NASA's Materials Processing in Space Program is to thoroughly understand the molten zone/freezing crystal system and all the mechanisms that govern this system. The surface boundary conditions required to produce flat solid-melt interfaces in float zones were studied and computed. The results provide float zone furnace designers with better methods for controlling solid-melt interface shapes and for computing thermal profiles and gradients. Documentation and a user's guide were provided for the computer software.
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1990-01-01
Four applications of microcomputers in the chemical laboratory are presented. Included are "Mass Spectrometer Interface with an Apple II Computer,""Interfacing the Spectronic 20 to a Computer,""A pH-Monitoring and Control System for Teaching Laboratories," and "A Computer-Aided Optical Melting Point Device." Software, instrumentation, and uses are…
Designing the Instructional Interface.
ERIC Educational Resources Information Center
Lohr, L. L.
2000-01-01
Designing the instructional interface is a challenging endeavor requiring knowledge and skills in instructional and visual design, psychology, human-factors, ergonomic research, computer science, and editorial design. This paper describes the instructional interface, the challenges of its development, and an instructional systems approach to its…
Goal selection versus process control while learning to use a brain-computer interface
NASA Astrophysics Data System (ADS)
Royer, Audrey S.; Rose, Minn L.; He, Bin
2011-06-01
A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.
A PDP-15 to industrial-14 interface at the Lewis Research Center's cyclotron
NASA Technical Reports Server (NTRS)
Kebberly, F. R.; Leonard, R. F.
1977-01-01
An interface (hardware and software) was built which permits the loading, monitoring, and control of a Digital Equipment Industrial-14/30 programmable controller by a PDP-15 computer. The interface utilizes the serial mode for data transfer to and from the controller, so the required hardware is essentially that of a teletype unit except for the speed of transmission. The software described here permits the user to load binary paper tape, read or load individual controller memory locations, and, if desired, turn controller outputs on and off directly from the computer.
Nawroth, Christian; von Borell, Eberhard; Langbein, Jan
2015-01-01
Recently, comparative research on the mechanisms and species-specific adaptive values of attributing attentive states and using communicative cues has gained increased interest, particularly in non-human primates, birds, and dogs. Here, we investigate these phenomena in a farm animal species, the dwarf goat (Capra aegagrus hircus). In the first experiment, we investigated the effects of different human head and body orientations, as well as human experimenter presence/absence, on the behaviour of goats in a food-anticipating paradigm. Over a 30-s interval, the experimenter engaged in one of four different postures or behaviours (head and body towards the subject-'Control', head to the side, head and body away from the subject, or leaving the room) before delivering a reward. We found that the level of subjects' active anticipatory behaviour was highest in the control condition and decreased with a decreasing level of attention paid to the subject by the experimenter. Additionally, goats 'stared' (i.e. stood alert) at the experimental set-up for significantly more time when the experimenter was present but paid less attention to the subject ('Head' and 'Back' condition) than in the 'Control' and 'Out' conditions. In a second experiment, the experimenter provided different human-given cues that indicated the location of a hidden food reward in a two-way object choice task. Goats were able to use both 'Touch' and 'Point' cues to infer the correct location of the reward but did not perform above the level expected by chance in the 'Head only' condition. We conclude that goats are able to differentiate among different body postures of a human, including head orientation; however, despite their success at using multiple physical human cues, they fail to spontaneously use human head direction as a cue in a food-related context.
[The current state of the brain-computer interface problem].
Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A
2015-01-01
It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed revolutionary changes in the lives of individuals and the entire society. Computer technologies are tightly connected with every field, either directly or indirectly. We can currently claim that computers are far superior to the human mind in terms of a number of parameters; however, machines lack the key feature: they are incapable of independent thinking (like a human). The key to the successful development of humankind, though, is collaboration between the brain and the computer rather than competition. Such collaboration, in which a computer broadens, supplements, or replaces some brain functions, is known as the brain-computer interface. Our review focuses on real-life implementations of this collaboration.
Computer Series, 62: Bits and Pieces, 25.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1985-01-01
Describes: (1) a FORTH-language, computer-controlled potentiometric titration; (2) coulometric titrations using computer-interfaced potentiometric endpoint detection; (3) interfacing a scanning infrared spectrophotometer to a microcomputer; (4) demonstrations of signal-to-noise enhancement (digital filtering); and (5) an inexpensive Apple…
Ten Design Points for the Human Interface to Instructional Multimedia.
ERIC Educational Resources Information Center
McFarland, Ronald D.
1995-01-01
Ten ways to design an effective Human-Computer Interface are explained. Highlights include material delivery that relates to user knowledge; appropriate screen presentations; attention value versus learning and recall; the relationship of packaging and message; the effectiveness of visuals and text; the use of color to enhance communication; the…
Implementing Artificial Intelligence Behaviors in a Virtual World
NASA Technical Reports Server (NTRS)
Krisler, Brian; Thome, Michael
2012-01-01
In this paper, we present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction, and gesture-based interfaces. We describe our use of these technologies to implement a cost-effective, immersive experience in a public region in Second Life. We provision our artificial agent as a German Shepherd Dog avatar with an external rules engine controlling its behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars, playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture: a natural-seeming day in the park.
Learning to Manage: A Program Just for Directors.
ERIC Educational Resources Information Center
Thomas, Megan E.
1996-01-01
Describes the Head Start-Johnson & Johnson Management Fellows program, whose mission is strengthening management skills of Head Start directors by providing training in human resources management, organizational design and development, financial management, computers and information systems, operations, marketing, and development of strategic…
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
Human Factors and Technical Considerations for a Computerized Operator Support System Prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulrich, Thomas Anthony; Lew, Roger Thomas; Medema, Heather Dawne
2015-09-01
A prototype computerized operator support system (COSS) has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based on four underlying elements consisting of a digital alarm system, computer-based procedures, P&ID system representations, and a recommender module for mitigation actions. At this point, the prototype simulates an interface to a sensor validation module and a fault diagnosis module. These two modules will be fully integrated in the next version of the prototype. The initial version of the prototype is now operational at the Idaho National Laboratory using the U.S. Department of Energy's Light Water Reactor Sustainability (LWRS) Human Systems Simulation Laboratory (HSSL). The HSSL is a full-scope, full-scale glass top simulator capable of simulating existing and future nuclear power plant main control rooms. The COSS is interfaced to the Generic Pressurized Water Reactor (gPWR) simulator with industry-typical control board layouts. The glass top panels display realistic images of the control boards that can be operated by touch gestures. A section of the simulated control board was dedicated to the COSS human-system interface (HSI), which resulted in a seamless integration of the COSS into the normal control room environment. A COSS demonstration scenario has been developed for the prototype involving the Chemical & Volume Control System (CVCS) of the PWR simulator. It involves a primary coolant leak outside of containment that would require tripping the reactor if not mitigated in a very short timeframe. The COSS prototype presents a series of operator screens that provide the needed information and soft controls to successfully mitigate the event.
NASA Technical Reports Server (NTRS)
Lax, F. M.
1975-01-01
A time-controlled navigation system applicable to the descent phase of flight for airline transport aircraft was developed and simulated. The design incorporates the linear discrete-time sampled-data version of the linearized continuous-time system describing the aircraft's aerodynamics. Using optimal linear quadratic control techniques, an optimal deterministic regulator implementable on an airborne computer is designed. The navigation controller assists the pilot in complying with assigned times of arrival along a four-dimensional flight path in the presence of wind disturbances. The strategic air traffic control concept is also described, followed by the design of a strategic control descent path. A strategy for determining possible times of arrival at specified waypoints along the descent path and for generating the corresponding route-time profiles that are within the performance capabilities of the aircraft is presented. Using a mathematical model of the Boeing 707-320B aircraft along with a Boeing 707 cockpit simulator interfaced with an Adage AGT-30 digital computer, a real-time simulation of the complete aircraft aerodynamics was achieved. The strategic four-dimensional navigation controller for longitudinal dynamics was tested on the nonlinear aircraft model in the presence of 15-, 30-, and 45-knot headwinds. The results indicate that the controller preserved the desired accuracy and precision of a time-controlled aircraft navigation system.
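The discrete-time LQR design step described here can be sketched as follows; the toy A and B matrices stand in for the linearized, sampled-data aircraft model, which is not reproduced in the abstract.

```python
# Minimal discrete-time LQR sketch. The two-state model below is an
# illustrative placeholder, not the Boeing 707-320B dynamics from the study.
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Optimal gain K for x[k+1] = A x[k] + B u[k], minimizing sum(x'Qx + u'Ru)."""
    P = solve_discrete_are(A, B, Q, R)                    # discrete Riccati solution
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # K = (R+B'PB)^-1 B'PA

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # toy longitudinal dynamics
B = np.array([[0.005],
              [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# the regulator then applies u[k] = -K @ x[k] to hold the aircraft to the 4D path
```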
NASA Astrophysics Data System (ADS)
Singh, Santosh Kumar; Ghatak Choudhuri, Sumit
2018-05-01
Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing the modules therefore requires an intensive design using a proper control strategy. The potential of human-intuitive fuzzy logic (FL) control with an imprecise system model is well known and can thus be exploited in parallel-connected UPS systems. A conventional FL controller, however, is computationally intensive, especially with a higher number of input variables. This paper proposes the application of hierarchical fuzzy logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulated results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.
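As a hedged illustration of why the hierarchical structure cuts computation: chaining two 2-input fuzzy stages with three labels per input evaluates 9 + 9 = 18 rules instead of the 27 a flat 3-input controller would need. The membership functions, rule table, and signal assignments below are invented for the sketch and are not taken from the paper.

```python
# Toy two-stage (hierarchical) Mamdani-style fuzzy inference with singleton
# outputs. All shapes and rule values are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership value of x on the triangle (a, b, c)."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy2(x1, x2, rule_out):
    """Tiny 2-input stage with 3 labels per input, i.e. 9 rules."""
    labels = [(-1, -0.5, 0), (-0.5, 0, 0.5), (0, 0.5, 1)]
    num = den = 0.0
    for i, l1 in enumerate(labels):
        for j, l2 in enumerate(labels):
            w = min(tri(x1, *l1), tri(x2, *l2))   # rule firing strength
            num += w * rule_out[i][j]             # weighted singleton output
            den += w
    return num / den if den else 0.0

R = [[-1, -0.5, 0], [-0.5, 0, 0.5], [0, 0.5, 1]]  # shared illustrative rule table
# stage 1 combines voltage and current errors; stage 2 adds the circulating
# current, so the cascade needs 18 rule evaluations rather than 27
u1 = fuzzy2(0.2, -0.4, R)
u = fuzzy2(u1, 0.1, R)   # final duty-cycle correction (illustrative)
```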
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
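A minimal sketch of the classification step follows, assuming synthetic epochs shaped (trials, channels, samples); the study's actual preprocessing, channel montage, and P300 feature windows are not reproduced here.

```python
# Illustrative SVM classification of (trial-averaged) EEG epochs, echoing the
# single-trial vs. 10-trial-averaged comparison. All data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 150))  # (trials, channels, samples), synthetic
y = rng.integers(0, 2, 200)              # attended (1) vs unattended (0) stimulus

def average_trials(X, y, n_avg=10):
    """Average n_avg same-class trials to raise SNR, as in the 10-trial result."""
    Xa, ya = [], []
    for label in np.unique(y):
        grp = X[y == label]
        for i in range(0, len(grp) - n_avg + 1, n_avg):
            Xa.append(grp[i:i + n_avg].mean(axis=0))
            ya.append(label)
    return np.array(Xa), np.array(ya)

Xa, ya = average_trials(X, y)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, Xa.reshape(len(Xa), -1), ya, cv=5)
print(scores.mean())  # near chance here, since the data are synthetic
```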
Head-mounted display systems and the special operations soldier
NASA Astrophysics Data System (ADS)
Loyd, Rodney B.
1998-08-01
In 1997, the Boeing Company, working with DARPA under the Smart Modules program and the US Army Soldier Systems Command, embarked on an advanced research and development program to develop a wearable computer system tailored for use with soldiers of the US Special Operations Command. The 'special operations combat management system' is a rugged advanced wearable tactical computer, designed to provide the special operations soldier with enhanced situation awareness and battlefield information capabilities. Many issues must be considered during the design of wearable computers for a combat soldier, including the system weight, placement on the body with respect to other equipment, user interfaces and display system characteristics. During the initial feasibility study for the system, the operational environment was examined and potential users were interviewed to establish the proper display solution for the system. Many display system requirements resulted, such as head or helmet mounting, Night Vision Goggle compatibility, minimal visible light emissions, environmental performance and even the need for handheld or other 'off the head' type display systems. This paper will address these issues and other end user requirements for display systems for applications in the harsh and demanding environment of the Special Operations soldier.
Superconductor magnetic reading and writing heads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.S.; Dugas, M.P.
1990-11-20
This paper describes a head for interfacing with magnetic recording media. It comprises: a member of magnetic material forming at least a portion of a magnetic flux circuit ending with a pole face surface in interfacing relation to the media for establishing a main pole in proximity to the media in the magnetic flux circuit, magnetically responsive means in magnetically coupled relation to the magnetic flux circuit, and means encasing at least a portion of the external surfaces of the member with superconductive material except for the media-interfacing portion of the pole face surface. The encasing means includes superconducting material substantially surrounding the magnetic flux circuit in proximity to the pole face surface, and means establishing an environment for the superconductive material at a temperature for maintaining the superconductive material in its superconductive state, whereby magnetic flux in the magnetic flux circuit associated with the encasing means is concentrated within the magnetic flux circuit while placement of the pole face surface in proximity to the recording media permits sensitive magnetic-flux-controlled information exchanges between the media and the head.
CDROM User Interface Evaluation: The Appropriateness of GUIs.
ERIC Educational Resources Information Center
Bosch, Victoria Manglano; Hancock-Beaulieu, Micheline
1995-01-01
Assesses the appropriateness of GUIs (graphical user interfaces), more specifically Windows-based interfaces for CD-ROM. An evaluation model is described that was developed to carry out an expert evaluation of the interfaces of seven CD-ROM products. Results are discussed in light of HCI (human-computer interaction) usability criteria and design…
Using Simulation Speeds to Differentiate Controller Interface Concepts
NASA Technical Reports Server (NTRS)
Trujillo, Anna; Pope, Alan
2008-01-01
This study investigated two concepts: (1) whether speeding up a human-in-the-loop simulation (or the subject's "world") scales time stress in such a way as to cause primary task performance to reveal workload differences between experimental conditions and (2) whether using natural hand motions to control the attitude of an aircraft makes controlling the aircraft easier and more intuitive. This was accomplished by having pilots and non-pilots make altitude and heading changes using three different control inceptors at three simulation speeds. Results indicate that simulation speed does affect workload and controllability. The bank and pitch angle error was affected by simulation speed but not by a simulation speed by controller type interaction; this may have been due to the relatively easy flying task. Results also indicate that pilots could control the bank and pitch angle of an aircraft about as well with the glove as with the sidestick. Non-pilots approached the pilots' ability to control the bank and pitch angle of an aircraft using the positional glove, in which the hand angle is directly proportional to the commanded aircraft angle. Therefore, (1) changing the simulation speed lends itself to objectively indexing a subject's workload and may also aid in differentiating among interface concepts based upon performance if the task being studied is sufficiently challenging, and (2) using natural body movements to mimic the movement of an airplane for attitude control is feasible.
Computer-Based Tools for Evaluating Graphical User Interfaces
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1997-01-01
The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; rather, it should be part of an iterative design cycle, with the output of evaluation fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.
Head pose estimation in computer vision: a survey.
Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai
2009-04-01
The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.
NASA Astrophysics Data System (ADS)
Schieber, Marc H.
2016-07-01
Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews[1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Relative brain displacement and deformation during constrained mild frontal head impact.
Feng, Y; Abney, T M; Okamoto, R J; Pless, R B; Genin, G M; Bayly, P V
2010-12-06
This study describes the measurement of fields of relative displacement between the brain and the skull in vivo by tagged magnetic resonance imaging and digital image analysis. Motion of the brain relative to the skull occurs during normal activity, but if the head undergoes high accelerations, the resulting large and rapid deformation of neuronal and axonal tissue can lead to long-term disability or death. Mathematical modelling and computer simulation of acceleration-induced traumatic brain injury promise to illuminate the mechanisms of axonal and neuronal pathology, but numerical studies require knowledge of boundary conditions at the brain-skull interface, material properties and experimental data for validation. The current study provides a dense set of displacement measurements in the human brain during mild frontal skull impact constrained to the sagittal plane. Although head motion is dominated by translation, these data show that the brain rotates relative to the skull. For these mild events, characterized by linear decelerations near 1.5g (g = 9.81 m s⁻²) and angular accelerations of 120-140 rad s⁻², relative brain-skull displacements of 2-3 mm are typical; regions of smaller displacements reflect the tethering effects of brain-skull connections. Strain fields exhibit significant areas with maximal principal strains of 5 per cent or greater. These displacement and strain fields illuminate the skull-brain boundary conditions, and can be used to validate simulations of brain biomechanics.
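The step from a measured displacement field to a strain field can be illustrated as follows; the displacement arrays and pixel spacing are synthetic placeholders, and the small-strain tensor is used as a simplification of the paper's analysis.

```python
# Finite-difference strain from an in-plane displacement field: the gradient
# of u gives E = 0.5*(grad u + grad u^T); eigenvalues are principal strains.
import numpy as np

rng = np.random.default_rng(5)
ux = rng.standard_normal((64, 64)) * 0.1  # in-plane displacements (mm), synthetic
uy = rng.standard_normal((64, 64)) * 0.1
h = 1.5                                   # pixel spacing (mm), assumed

dux_dy, dux_dx = np.gradient(ux, h)       # axis 0 = rows (y), axis 1 = cols (x)
duy_dy, duy_dx = np.gradient(uy, h)

exx, eyy = dux_dx, duy_dy                 # normal strains
exy = 0.5 * (dux_dy + duy_dx)             # shear strain

# maximal principal strain per pixel, via the closed-form 2D eigenvalue
e_max = 0.5 * (exx + eyy) + np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
print((e_max > 0.05).mean())              # fraction of pixels above 5 per cent
```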
EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.
Yin, Yue H; Fan, Yuan J; Xu, Li D
2012-07-01
Although a lower extremity exoskeleton shows great prospect in the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly caused by insufficient information interactions between the paralyzed and existing exoskeleton that cannot meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of human body. The neurofuzzy controller is built to decode human motion in advance by the fusion of the fuzzy electromyographic signals reflecting human motion intention and the precise proprioception providing joint angular feedback information. It transmits control information from human to exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Fink, D. Hill, J. O'Hara
2004-11-30
Nuclear plant operators face a significant challenge designing and modifying control rooms. This report provides guidance on planning, designing, implementing and operating modernized control rooms and digital human-system interfaces.
High-resolution EEG techniques for brain-computer interface applications.
Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio
2008-01-15
High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes the spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor-related tasks. The HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) a distributed source model composed of a layer of current dipoles geometrically constrained to the cortical mantle, (iii) a depth-weighted minimum L2-norm constraint and Tikhonov regularization for the linear inverse problem solution and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self-modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data were used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features were evaluated through statistical tests (r² analysis) to quantify their reliability for BCI control. These results were compared, within subjects, to analogous results obtained without HREEG techniques. The processing procedure was designed in such a way that computations could be split into a setup phase (which includes most of the computational burden) and the actual EEG processing phase, which was limited to a single matrix multiplication. This separation made the procedure suitable for on-line utilization, and a pilot experiment was performed. Results show that lateralization of electrical activity, which is expected to be contralateral to the imagined movement, is more evident in the estimated CCDs than in the scalp potentials. CCDs produce a pattern of relevant spectral features that is more spatially focused and has a higher statistical significance (EEG: 0.20 ± 0.114 S.D.; CCD: 0.55 ± 0.16 S.D.; p = 10⁻⁵). A pilot experiment showed that a trained subject could utilize voluntary modulation of estimated CCDs for accurate (eight targets) on-line control of a cursor. This study showed that it is practically feasible to utilize HREEG techniques for on-line operation of a BCI system; off-line analysis suggests that the accuracy of BCI control is enhanced by the proposed method.
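A minimal numerical sketch of this kind of linear inverse step follows; the synthetic lead field, the depth-weighting rule, and the regularization heuristic are simplified stand-ins for the individualized head models and Tikhonov scheme described above.

```python
# Depth-weighted, Tikhonov-regularized minimum-L2-norm inverse (illustrative).
# L maps cortical dipoles to sensors; j estimates current density from one
# scalp-potential sample v. All values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((64, 5000))            # lead field: 64 sensors, 5000 dipoles
v = rng.standard_normal(64)                    # one sample of scalp potentials

w = 1.0 / np.linalg.norm(L, axis=0)            # depth weighting (weak columns boosted)
Lw = L * w                                     # column-weighted lead field
lam = 0.1 * np.trace(Lw @ Lw.T) / Lw.shape[0]  # heuristic regularization level

# j = W L'(L W L' + lambda I)^-1 v with W = diag(w^2), solved without inverses
G = Lw @ Lw.T + lam * np.eye(Lw.shape[0])
j = w * (Lw.T @ np.linalg.solve(G, v))         # current density per dipole

# for on-line use, the operator w[:, None] * (Lw.T @ np.linalg.inv(G)) can be
# precomputed in a setup phase, reducing each EEG sample to one matrix multiply
```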
Multimodal Neuroelectric Interface Development
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Totah, Joseph (Technical Monitor)
2001-01-01
This project aims to improve performance of NASA missions by developing multimodal neuroelectric technologies for augmented human-system interaction. Neuroelectric technologies will add completely new modes of interaction that operate in parallel with keyboards, speech, or other manual controls, thereby increasing the bandwidth of human-system interaction. We recently demonstrated the feasibility of real-time electromyographic (EMG) pattern recognition for a direct neuroelectric human-computer interface. We recorded EMG signals from an elastic sleeve with dry electrodes, while a human subject performed a range of discrete gestures. A machine-learning algorithm was trained to recognize the EMG patterns associated with the gestures and map them to control signals. Successful applications now include piloting two Class 4 aircraft simulations (F-15 and 757) and entering data with a "virtual" numeric keyboard. Current research focuses on on-line adaptation of EMG sensing and processing and recognition of continuous gestures. We are also extending this on-line pattern recognition methodology to electroencephalographic (EEG) signals. This will allow us to bypass muscle activity and draw control signals directly from the human brain. Our system can reliably detect the µ-rhythm (a periodic EEG signal from motor cortex in the 10 Hz range) with a lightweight headset containing saline-soaked sponge electrodes. The data show that the EEG µ-rhythm can be modulated by real and imagined motions. Current research focuses on using biofeedback to train human subjects to modulate EEG rhythms on demand, and on examining interactions of EEG-based control with EMG-based and manual control. Viewgraphs on these neuroelectric technologies are also included.
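A hedged sketch of the EMG pattern-recognition idea: windowed RMS amplitudes per electrode feed a simple classifier that maps gestures to control signals. The data shapes, window length, and classifier choice are assumptions, not the project's actual algorithm.

```python
# Windowed RMS features from a multi-electrode sleeve, classified into discrete
# gestures. All data below are synthetic stand-ins for recorded EMG.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def rms_features(window):
    """Root-mean-square amplitude per channel for one analysis window."""
    return np.sqrt(np.mean(window ** 2, axis=-1))

rng = np.random.default_rng(2)
windows = rng.standard_normal((300, 8, 200))  # 300 windows, 8 electrodes, 200 samples
labels = rng.integers(0, 4, 300)              # 4 discrete gestures (synthetic)

X = np.array([rms_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:200], labels[:200])
pred = clf.predict(X[200:])                   # each gesture maps to a control signal
```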
The effect of switch control site on computer skills of infants and toddlers.
Glickman, L; Deitz, J; Anson, D; Stewart, K
1996-01-01
The purpose of this study was to determine whether switch control site (hand vs. head) affects the age at which children can successfully activate a computer to play a cause-and-effect game. The sample consisted of 72 participants randomly divided into two groups (head switch and hand switch), with stratification for gender and age (9-11 months, 12-14 months, 15-17 months). All participants were typically developing. After a maximum of 5 min of training, each participant was given five opportunities to activate a Jelly Bean switch to play a computer game. Competency was defined as four to five successful switch activations. Most participants in the 9-month to 11-month age group could successfully use a hand switch to activate a computer, and for the 15-month to 17-month age group, 100% of the participants met with success. By contrast, in the head switch condition, approximately one third of the participants in each of the three age ranges were successful in activating the computer to play a cause-and-effect game. The findings from this study provide developmental guidelines for using switches (head vs. hand) to activate computers to play cause-and-effect games and suggest that the clinician may consider introducing basic computer and switch skills to children as young as 9 months of age. However, the clinician is cautioned that the head switch may be more difficult to master than the hand switch and that additional research involving children with motor impairments is needed.
Focus Your Young Visitors: Kids Innovation--Fundamental Changes in Digital Edutainment.
ERIC Educational Resources Information Center
Sauer, Sebastian; Gobel, Stefan
With regard to the acceptance of human-computer interfaces, immersion represents one of the most important methods for attracting young visitors into museum exhibitions. Exciting and diversely presented content as well as intuitive, natural and human-like interfaces are indispensable to bind users to an interactive system with real and digital…
Spacecraft crew procedures from paper to computers
NASA Technical Reports Server (NTRS)
Oneal, Michael; Manahan, Meera
1991-01-01
Described here is a research project that uses human factors and computer systems knowledge to explore and help guide the design and creation of an effective Human-Computer Interface (HCI) for spacecraft crew procedures. By having a computer system behind the user interface, it is possible to have increased procedure automation, related system monitoring, and personalized annotation and help facilities. The research project includes the development of computer-based procedure system HCI prototypes and a testbed for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations. The testbed will include a system for procedure authoring, editing, training, and execution. Progress on developing HCI prototypes for a middeck experiment performed on Space Shuttle Mission STS-34 and for upcoming medical experiments is discussed. The status of the experimental testbed is also discussed.
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor in certain dependents, whereas the effect of the screen output type was only significant on the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high-resolution CMOS imaging system especially suitable for use in a periscope head is described. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
Interfacing laboratory instruments to multiuser, virtual memory computers
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Stang, David B.; Roth, Don J.
1989-01-01
Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.
Of Lice and Math: Using Models to Understand and Control Populations of Head Lice
Laguna, María Fabiana; Risau-Gusman, Sebastián
2011-01-01
In this paper we use detailed data about the biology of the head louse (pediculus humanus capitis) to build a model of the evolution of head lice colonies. Using theory and computer simulations, we show that the model can be used to assess the impact of the various strategies usually applied to eradicate head lice, both conscious (treatments) and unconscious (grooming). In the case of treatments, we study the difference in performance that arises when they are applied in systematic and non-systematic ways. Using some reasonable simplifying assumptions (as random mixing of human groups and the same mobility for all life stages of head lice other than eggs) we model the contagion of pediculosis using only one additional parameter. It is shown that this parameter can be tuned to obtain collective infestations whose characteristics are compatible with what is given in the literature on real infestations. We analyze two scenarios: One where group members begin treatment when a similar number of lice are present in each head, and another where there is one individual who starts treatment with a much larger threshold (“superspreader”). For both cases we assess the impact of several collective strategies of treatment. PMID:21799752
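A toy stochastic version of such a colony model might look like the following; the growth, grooming, and treatment parameters are invented for illustration and are not the paper's calibrated values.

```python
# Minimal daily simulation of a louse colony under growth, grooming losses,
# and optional periodic (systematic) treatment. All rates are assumptions.
import numpy as np

rng = np.random.default_rng(4)

def simulate(days=60, treat_every=None, treat_eff=0.95,
             growth=0.12, grooming=0.05, start=10):
    """Return the daily colony size trajectory."""
    n, history = start, []
    for day in range(days):
        births = rng.poisson(growth * n)
        removed = rng.binomial(n, grooming)          # unconscious control
        n = max(n + births - removed, 0)
        if treat_every and day % treat_every == 0:   # conscious, systematic control
            n = rng.binomial(n, 1 - treat_eff)
        history.append(n)
    return history

untreated = simulate()
systematic = simulate(treat_every=7)
print(untreated[-1], systematic[-1])  # periodic treatment keeps the colony near zero
```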
Quadcopter control using a BCI
NASA Astrophysics Data System (ADS)
Rosca, S.; Leba, M.; Ionica, A.; Gamulescu, O.
2018-01-01
The paper shows how two now-ubiquitous technologies can be interconnected. On the one hand, drones are increasingly present and integrated into more and more fields of activity, moving beyond the military applications they originated in towards entertainment, real estate, delivery, and so on. On the other hand, unconventional man-machine interfaces remain fertile topics to explore now and in the future. Of these, we chose the brain-computer interface (BCI), which allows human-machine interaction without requiring any movement from the user. The research consists of mathematical modeling and numerical simulation of a drone and a BCI. An application using a Parrot mini-drone and an Emotiv Insight BCI headset is then presented.
The Self-Paced Graz Brain-Computer Interface: Methods and Applications
Scherer, Reinhold; Schloegl, Alois; Lee, Felix; Bischof, Horst; Janša, Janez; Pfurtscheller, Gert
2007-01-01
We present the self-paced 3-class Graz brain-computer interface (BCI) which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as control signal (intentional control) or not (non-control state). The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application where subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments and each learned to navigate through the VE and collect coins. Two out of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth. PMID:18350133
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.
Augmenting digital displays with computation
NASA Astrophysics Data System (ADS)
Liu, Jing
As we inevitably step deeper and deeper into a world connected via the Internet, more and more information will be exchanged digitally. Displays are the interface between digital information and each individual. Naturally, one fundamental goal of displays is to reproduce information as realistically as possible since humans still care a lot about what happens in the real world. Human eyes are the receiving end of such information exchange; therefore it is impossible to study displays without studying the human visual system. In fact, the design of displays is rather closely coupled with what human eyes are capable of perceiving. For example, we are less interested in building displays that emit light in the invisible spectrum. This dissertation explores how we can augment displays with computation, which takes both display hardware and the human visual system into consideration. Four novel projects on display technologies are included in this dissertation: First, we propose a software-based approach to driving multiview autostereoscopic displays. Our display algorithm can dynamically assign views to hardware display zones based on multiple observers' current head positions, substantially reducing crosstalk and stereo inversion. Second, we present a dense projector array that creates a seamless 3D viewing experience for multiple viewers. We smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array's field of view, reducing image distortion, crosstalk, and artifacts from tracking errors. Third, we propose a method for high dynamic range display calibration that takes into account the variation of the chrominance error over luminance. We propose a data structure for enabling efficient representation and querying of the calibration function, which also allows user-guided balancing between memory consumption and the amount of computation. Fourth, we present user studies that demonstrate that the ~60 Hz critical flicker fusion rate for traditional displays is not enough for some computational displays that show complex image patterns. The study focuses on displays with hidden channels, and their application to 3D+2D TV. By taking advantage of the fast-growing power of computation and sensors, these four novel display setups, in combination with display algorithms, advance the frontier of computational display research.
Guidance for human interface with artificial intelligence systems
NASA Technical Reports Server (NTRS)
Potter, Scott S.; Woods, David D.
1991-01-01
The beginning of a research effort to collect and integrate existing research findings about how to combine computer power and people is discussed, including problems and pitfalls as well as desirable features. The goal of the research is to develop guidance for the design of human interfaces with intelligent systems. Fault management tasks in NASA domains are the focus of the investigation. Research is being conducted to support the development of guidance for designers that will enable them to take human interface considerations into account during the creation of intelligent systems.
Optical mass memory system (AMM-13). AMM/DBMS interface control document
NASA Technical Reports Server (NTRS)
Bailey, G. A.
1980-01-01
The baseline for the external interfaces of a 10^13-bit optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.
Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G
1999-01-01
An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g., the late stage of amyotrophic lateral sclerosis) and has to operate in real time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time kernel expansion to obtain reasonable real-time operation on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least squares (RLS) algorithm that describes the current state of the EEG in real time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
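The RLS update at the heart of such a system can be sketched in a few lines; the autoregressive model order, forgetting factor, and synthetic signal below are assumptions, not the paper's configuration.

```python
# Recursive least squares (RLS) fitting of adaptive autoregressive (AR)
# coefficients, sample by sample, as a stand-in for tracking the EEG state.
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One RLS update: regressor x, desired sample d, forgetting factor lam."""
    k = P @ x / (lam + x @ P @ x)        # gain vector
    e = d - w @ x                        # a priori prediction error
    w = w + k * e                        # coefficient update
    P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update
    return w, P

order = 6
w = np.zeros(order)
P = np.eye(order) * 1000.0
eeg = np.random.default_rng(3).standard_normal(1000)  # stand-in for one channel
for n in range(order, len(eeg)):
    x = eeg[n - order:n][::-1]           # most recent samples as the regressor
    w, P = rls_step(w, P, x, eeg[n])     # w now holds the running AR(6) estimate
```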
Web-based interactive drone control using hand gesture
NASA Astrophysics Data System (ADS)
Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng
2018-01-01
This paper develops a drone control prototype based on web technology with the aid of hand gesture. The uplink control command and downlink data (e.g., video) are transmitted by WiFi communication, and all the information exchange is realized on web. The control command is translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed by a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program is needed to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications such as controlling a drone in global positioning system denied environment or by handlers without professional drone control knowledge since it is easy to get started.
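The gesture-to-command translation layer might be organized as a simple lookup, sketched below; the gesture labels, command tuples, and the transport call in the final comment are hypothetical, not the paper's actual interface.

```python
# Mapping recognized hand-gesture labels to normalized drone velocity commands.
# Names and values are illustrative assumptions.
from typing import Dict, Tuple

# (vx, vy, vz, yaw_rate) in normalized units, one entry per predetermined gesture
GESTURE_COMMANDS: Dict[str, Tuple[float, float, float, float]] = {
    "open_palm":  (0.0, 0.0, 0.0, 0.0),   # hover
    "fist":       (0.0, 0.0, -0.5, 0.0),  # descend
    "point_up":   (0.0, 0.0, 0.5, 0.0),   # ascend
    "swipe_left": (0.0, -0.5, 0.0, 0.0),  # strafe left
}

def translate(gesture: str) -> Tuple[float, float, float, float]:
    """Unknown gestures map to hover, a conservative failure mode."""
    return GESTURE_COMMANDS.get(gesture, GESTURE_COMMANDS["open_palm"])

# on each recognition event the web layer would send the command uplink,
# e.g. websocket.send(json.dumps(translate(label)))  (hypothetical transport)
```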
A Novel Method for Intraoral Access to the Superior Head of the Human Lateral Pterygoid Muscle
Oliveira, Aleli Tôrres; Camilo, Anderson Aparecido; Bahia, Paulo Roberto Valle; Carvalho, Antonio Carlos Pires; DosSantos, Marcos Fabio; da Silva, Jorge Vicente Lopes; Monteiro, André Antonio
2014-01-01
Background. The uncoordinated activity of the superior and inferior parts of the lateral pterygoid muscle (LPM) has been suggested to be one of the causes of temporomandibular joint (TMJ) disc displacement. A therapy for this muscle disorder is the injection of botulinum toxin (BTX) into the LPM. However, there is a potential risk of side effects with the injection guide methods currently available. In addition, they do not permit appropriate differentiation between the two bellies of the muscle. Herein, a novel method is presented to provide intraoral access to the superior head of the human LPM with maximal control and minimal hazards. Methods. Computed tomography along with digital imaging software programs and rapid prototyping techniques were used to create a rapid prototyped guide to orient BTX injections in the superior LPM. Results. The method proved to be feasible and reliable. Furthermore, when tested in one volunteer, it allowed precise access to the upper head of the LPM without producing side effects. Conclusions. The prototyped guide presented in this paper is a novel tool that provides intraoral access to the superior head of the LPM. Further studies will be necessary to test the efficacy and validate this method in a larger cohort of subjects. PMID:24963484
Human-machine interface for a VR-based medical imaging environment
NASA Astrophysics Data System (ADS)
Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans
1997-05-01
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields, and can serve in educational and training applications.
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real-time eye tracking has numerous applications in human-computer interaction, such as mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real-time eye tracking system. Our proposed system is an extension of spatio-temporal context learning through Kalman filtering. Spatio-temporal context learning offers state-of-the-art accuracy in general object tracking, but its performance suffers under object occlusion. The addition of the Kalman filter allows the proposed method to model the dynamics of eye motion and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time with eye movements.
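A minimal constant-velocity Kalman filter of the kind used to bridge occlusions might look like this; the frame rate, noise covariances, and state layout are assumptions for illustration.

```python
# Constant-velocity Kalman filter carrying the eye position through blinks:
# predict every frame, correct only when the tracker returns a measurement.
import numpy as np

dt = 1 / 30.0                                    # assumed camera frame period
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]])      # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])      # only position is measured
Q = np.eye(4) * 1e-3                             # process noise (assumed)
R = np.eye(2) * 2.0                              # measurement noise, pixels^2

x, P = np.zeros(4), np.eye(4) * 10.0

def kalman_update(x, P, z=None):
    """One frame: predict, then correct if a pupil position z is available."""
    x, P = F @ x, F @ P @ F.T + Q                # predict through occlusion
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kalman_update(x, P, np.array([320.0, 240.0]))  # detected pupil center
x, P = kalman_update(x, P, None)                      # blink frame: predict only
```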
Display integration for ground combat vehicles
NASA Astrophysics Data System (ADS)
Busse, David J.
1998-09-01
The United States Army's requirement to employ high resolution target acquisition sensors and information warfare to increase its dominance over enemy forces has led to the need to integrate advanced display devices into ground combat vehicle crew stations. The Army's force structure requires the integration of advanced displays on both existing and emerging ground combat vehicle systems. The fielding of second generation target acquisition sensors, color digital terrain maps and high volume digital command and control information networks on these platforms defines the display performance requirements. The greatest challenge facing the system integrator is the development and integration of advanced displays that meet operational, vehicle and human computer interface performance requirements for the ground combat vehicle fleet. The purpose of this paper is to address those challenges: operational and vehicle performance, non-soldier-centric crew station configurations, display performance limitations related to human computer interfaces and vehicle physical environments, display technology limitations and the Department of Defense (DOD) acquisition reform initiatives. How the ground combat vehicle Program Manager and system integrator are addressing these challenges is discussed through the integration of displays on fielded, current and future close combat vehicle applications.
The cortical mouse: a piece of forgotten history in noninvasive brain–computer interfaces.
Principe, Jose C
2013-07-01
Early research on brain-computer interfaces (BCIs) was fueled by the study of event-related potentials (ERPs) by Farwell and Donchin, who are rightly credited for laying important groundwork for the BCI field. However, many other researchers have made substantial contributions that have escaped the radar screen of the current BCI community. For example, in the late 1980s, I worked with a brilliant multidisciplinary research group in electrical engineering at the University of Florida, Gainesville, headed by Dr. Donald Childers. Childers should be well known to long-time members of the IEEE Engineering in Medicine and Biology Society since he was the editor-in-chief of IEEE Transactions on Biomedical Engineering in the 1970s and the recipient of one of the most prestigious society awards, the William J. Morlock Award, in 1973.
Miksztai-Réthey, Brigitta; Faragó, Kinga Bettina
2015-01-01
We studied artificial-intelligence-assisted interaction between a computer and a human with severe speech and physical impairments (SSPI). In order to speed up augmentative and alternative communication (AAC), we extended a former study of typing performance optimization using a framework that included head movement controlled assistive technology and an onscreen writing device. Quantitative and qualitative data were collected and analysed with mathematical methods, manual interpretation, and semi-supervised machine video annotation. As the result of our research, in contrast to the former experiment's conclusions, we found that our participant had at least two different typing strategies. To maximize his communication efficiency, a more complex assistive tool is suggested, one that takes the different methods into consideration.
Personalized keystroke dynamics for self-powered human-machine interfacing.
Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin
2015-01-27
The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human-machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
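Keystroke-dynamics biometrics of the kind mentioned above classically rest on two timing features: dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). The sketch below computes these from generic press/release events; the IKB's actual triboelectric signal chain is not described in this abstract, so the event format is an assumption.

```python
import numpy as np

# Each keystroke event: (key, press_time_s, release_time_s).
events = [("p", 0.00, 0.09), ("a", 0.21, 0.28), ("s", 0.40, 0.47),
          ("s", 0.61, 0.70)]

def keystroke_features(ev):
    """Dwell times (key held down) and flight times (gap between keys):
    the timing features keystroke-dynamics biometrics typically use."""
    dwell = np.array([release - press for _, press, release in ev])
    flight = np.array([ev[i + 1][1] - ev[i][2] for i in range(len(ev) - 1)])
    return np.concatenate([dwell, flight])

print(keystroke_features(events))   # feature vector for one typing sample
```

A per-user template (e.g., mean and variance of these features) can then be compared against new samples for identification, which is the behavioral-biometric use the abstract alludes to.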
User Centered System Design: Papers for the CHI '83 Conference on Human Factors in Computer Systems.
ERIC Educational Resources Information Center
California Univ., San Diego. Center for Human Information Processing.
Four papers from the University of California at San Diego (UCSD) Project on Human-Computer Interfaces are presented in this report. "Evaluation and Analysis of User's Activity Organization," by Liam Bannon, Allen Cypher, Steven Greenspan, and Melissa Monty, analyzes the activities performed by users of computer systems, develops a…
Analysis of hand contact areas and interaction capabilities during manipulation and exploration.
Gonzalez, Franck; Gosselin, Florian; Bachta, Wael
2014-01-01
Manual human-computer interfaces for virtual reality are designed to allow an operator to interact with a computer simulation as naturally as possible. Dexterous haptic interfaces are the best suited for this goal. They give intuitive and efficient control over the environment with haptic and tactile feedback. This paper aims to help in the choice of the interaction areas to be taken into account in the design of such interfaces. The literature dealing with hand interactions is first reviewed in order to identify the contact areas involved in exploration and manipulation tasks. Their frequencies of use are then extracted from existing recordings. The results are gathered in an original graphical interaction map allowing simple visualization of the way the hand is used, and compared with a map of mechanoreceptor densities. Then an interaction tree, mapping the relative number of actions made available through the use of a given contact area, is built and correlated with the losses of hand function induced by amputations. A rating of some existing haptic interfaces and guidelines for their design are finally provided to illustrate a possible use of the developed graphical tools.
Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design
NASA Astrophysics Data System (ADS)
Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy
We present an analysis of a communicability methodology for graphics and animation components in interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education, and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality, and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.
The research of laser marking control technology
NASA Astrophysics Data System (ADS)
Zhang, Qiue; Zhang, Rong
2009-08-01
In laser marking, the conventional control method is to insert a control card into the computer's motherboard; this does not support hot swapping and makes the card difficult to install or remove. Moreover, each marking system must be equipped with its own computer, and during marking that computer can do nothing but transmit the marking data, since other tasks would degrade marking precision. To address the problems of these traditional control methods, a design is introduced in which the computer performs marking-graphic editing and digital processing, while a high-speed digital signal processor (DSP) controls the whole marking process. The laser marking controller mainly contains a DSP2812, digital memory, a DAC (digital-to-analog converter) unit circuit, a USB interface control circuit, a man-machine interface circuit, and other logic control circuitry. The marking information processed by the computer is downloaded to a USB flash disk; the DSP reads the information through the USB interface when needed, processes it, uses its internal timer to control the marking time sequence, and outputs the scanner control signals through the D/A unit. This technology enables offline marking, thereby reducing product cost and increasing production efficiency. The system has performed well in actual marking units, with a marking speed about 20 percent faster than that of a PCI control card. It has practical application value.
TCP/IP Interface for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Carnright, Robert; Stodden, David; Coggi, John
2009-01-01
The Transmission Control Protocol/Internet Protocol (TCP/IP) interface for the Satellite Orbit Analysis Program (SOAP) provides the means for the software to establish real-time interfaces with other software. Such interfaces can operate between two programs, either on the same computer or on different computers joined by a network. The SOAP TCP/IP module employs a client/server interface in which SOAP is the server and other applications can be clients. Real-time interfaces between programs offer a number of advantages over embedding all of the common functionality within a single program. One advantage is that they allow the computational labor to be divided between the processors or computers running the separate applications. Another is that each program can provide its own domain of expertise, which the other programs are then able to use.
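The client/server pattern described here is the plain TCP request/reply exchange sketched below. SOAP's actual port number and message protocol are not given in the abstract, so the port and the command/reply strings are placeholders; a throwaway server thread stands in for SOAP so the example runs on its own.

```python
import socket
import threading
import time

def toy_server(port):
    """Stand-in for the server side (SOAP's role): answer one request."""
    with socket.socket() as srv:
        srv.bind(("localhost", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(4096)        # e.g. b"GET_STATE ..."
            conn.sendall(b"STATE ok\n")      # hypothetical reply format

PORT = 5000                                  # placeholder port
threading.Thread(target=toy_server, args=(PORT,), daemon=True).start()
time.sleep(0.2)                              # let the server start listening

# Client side: connect, send a request, read the real-time reply.
with socket.create_connection(("localhost", PORT), timeout=5.0) as sock:
    sock.sendall(b"GET_STATE satellite_1\n") # hypothetical request
    print(sock.recv(4096).decode())
```

The same pattern works unchanged across machines: only the hostname in `create_connection` changes, which is exactly the property the abstract highlights.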
Biomechanical Studies on Patterns of Cranial Bone Fracture Using the Immature Porcine Model.
Haut, Roger C; Wei, Feng
2017-02-01
This review was prepared for the American Society of Mechanical Engineers Lissner Medal. It specifically discusses research performed in the Orthopaedic Biomechanics Laboratories on pediatric cranial bone mechanics and patterns of fracture in collaboration with the Forensic Anthropology Laboratory at Michigan State University. Cranial fractures are often an important element seen by forensic anthropologists during the investigation of pediatric trauma cases litigated in courts. While forensic anthropologists and forensic biomechanists are often called on to testify in these cases, there is little basic science developed in support of their testimony. The following is a review of studies conducted in the above laboratories and supported by the National Institute of Justice to begin an understanding of the mechanics and patterns of pediatric cranial bone fracture. With the lack of human pediatric specimens, the studies utilize an immature porcine model. Because much case evidence involves cranial bone fracture, the studies described below focus on determining input loading based on the resultant bone fracture pattern. The studies involve impact to the parietal bone, the most often fractured cranial bone, and begin with experiments on entrapped heads, progressing to those involving free-falling heads. The studies involve head drops onto different types and shapes of interfaces with variations of impact energy. The studies show linear fractures initiating from sutural boundaries, away from the impact site, for flat surface impacts, in contrast to depressed fractures for more focal impacts. The results have been incorporated into a "Fracture Printing Interface (FPI)," using machine learning and pattern recognition algorithms. The interface has been used to help interpret mechanisms of injury in pediatric death cases collected from medical examiner offices. The ultimate aim of this program of study is to develop a "Human Fracture Printing Interface" that can be used by forensic investigators in determining mechanisms of pediatric cranial bone fracture.
Young Children's Skill in Using a Mouse to Control a Graphical Computer Interface.
ERIC Educational Resources Information Center
Crook, Charles
1992-01-01
Describes a study that investigated the performance of preschoolers and children in the first three years of formal education on tasks that involved skills using a mouse-based control of a graphical computer interface. The children's performance is compared with that of novice adult users and expert users. (five references) (LRW)
Inertial Orientation Trackers with Drift Compensation
NASA Technical Reports Server (NTRS)
Foxlin, Eric M.
2008-01-01
A class of inertial-sensor systems with drift compensation has been invented for use in measuring the orientations of human heads (and perhaps other, similarly sized objects). These systems can be designed to overcome some of the limitations of prior orientation-measuring systems that are based, variously, on magnetic, optical, mechanical-linkage, and acoustical principles. The orientation signals generated by the systems of this invention could be used for diverse purposes, including controlling head-orientation-dependent virtual reality visual displays or enabling persons whose limbs are paralyzed to control machinery by means of head motions. The inventive concept admits of variations too numerous to describe here, making it necessary to limit this description to a typical system, the selected aspects of which are illustrated in the figure. A set of sensors is mounted on a bracket on a band or a cap that gently but firmly grips the wearer's head to be tracked. Among the sensors are three drift-sensitive rotation-rate sensors (e.g., integrated-circuit angular-rate-measuring gyroscopes), which put out DC voltages nominally proportional to the rates of rotation about their sensory axes. These sensors are mounted in mutually orthogonal orientations for measuring rates of rotation about the roll, pitch, and yaw axes of the wearer's head. The outputs of these rate sensors are conditioned and digitized, and the resulting data are fed to an integrator module implemented in software in a digital computer. In the integrator module, the angular-rate signals are jointly integrated by any of several established methods to obtain a set of angles that represent approximately the orientation of the head in an external, inertial coordinate system. Because some drift is always present as a component of an angular position computed by integrating the outputs of angular-rate sensors, the orientation signal is processed further in a drift-compensator software module.
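The patent text above does not spell out the compensator's algorithm, but a standard way to cancel gyro integration drift is a complementary filter: integrate the rate signal for fast response and pull the estimate slowly toward a drift-free reference (e.g., an inclinometer or magnetometer reading). The sketch below illustrates that idea for one axis; the gain and bias values are illustrative assumptions.

```python
import numpy as np

def track_tilt(gyro_rate, ref_angle, dt=0.01, k=0.02):
    """Complementary filter: integrate the rate-gyro signal, then apply
    a slow correction toward a drift-free reference angle.  This is one
    standard drift-compensation scheme, not necessarily the patent's."""
    angle = ref_angle[0]
    out = []
    for w, ref in zip(gyro_rate, ref_angle):
        angle += w * dt              # raw integration (drifts over time)
        angle += k * (ref - angle)   # slow pull toward the reference
        out.append(angle)
    return np.array(out)

# Example: stationary head, gyro with a constant 0.5 deg/s bias.
t = np.arange(0, 10, 0.01)
gyro = np.full_like(t, 0.5)          # biased rate signal, deg/s
ref = np.zeros_like(t)               # drift-free reference angle, deg
# Pure integration would drift to 5 deg in 10 s; the filtered estimate
# settles near w*dt/k = 0.25 deg instead.
print(track_tilt(gyro, ref)[-1])
```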
Face recognition with the Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Suarez, Pedro F.
1991-12-01
The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
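The eigenface construction the thesis describes is the classic KLT/PCA pipeline: mean-center the training images, take the leading eigenvectors of their covariance as a basis, and use the projection coefficients as features. A minimal numpy sketch (with random stand-in "faces"; image size and component count are illustrative):

```python
import numpy as np

def eigenfaces(X, n_components=20):
    """KLT/eigenface basis: leading eigenvectors of the covariance of
    mean-centered face images.  X holds one flattened image per row."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data yields the covariance eigenvectors
    # (rows of Vt) without forming the large covariance matrix.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]          # the "eigenfaces"
    coeffs = Xc @ basis.T              # projection coefficients = features
    return mean, basis, coeffs

# Illustrative data: 40 random "faces" of 32x32 pixels.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 32 * 32))
mean, basis, feats = eigenfaces(X)
recon = mean + feats @ basis           # compressed reconstruction
print(feats.shape, np.linalg.norm(X - recon) / np.linalg.norm(X))
```

The same `feats` vectors double as the compression the abstract mentions: storing 20 coefficients per face instead of 1024 pixels, with `recon` as the decompressed image.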
A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.
Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh
2017-07-01
Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with the eye-tracker plus soft-switch. The interface includes nine different commands (eight directions and stop) and was integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, number of commands, and information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provided the superior performance among the three conditions, with an ITR of 37.77 bits/min (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
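ITR figures such as the 37.77 bits/min above are conventionally computed with the Wolpaw formula, which combines the number of selectable commands, selection accuracy, and selection speed. A small sketch (the plugged-in accuracy and speed are illustrative, not values quoted in the abstract):

```python
import math

def wolpaw_itr(n_targets, accuracy, commands_per_min):
    """Wolpaw information transfer rate in bits/min, the standard
    BCI/HCI throughput metric:
        B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
        ITR = B * selections per minute."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * commands_per_min

# Illustrative numbers for a nine-command interface (eight directions
# plus stop): 95% accuracy at 14 selections/min gives roughly the same
# order as the reported 37.77 bits/min.
print(round(wolpaw_itr(9, 0.95, 14), 2), "bits/min")
```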
Horschig, Jörn M; Oosterheert, Wouter; Oostenveld, Robert; Jensen, Ole
2015-11-01
Here we report that the modulation of alpha activity by covert attention can be used as a control signal in an online brain-computer interface, that it is reliable, and that it is robust. Subjects were instructed to orient covert visual attention to the left or right hemifield. We decoded the direction of attention from the magnetoencephalogram by a template matching classifier and provided the classification outcome to the subject in real-time using a novel graphical user interface. Training data for the templates were obtained from a Posner-cueing task conducted just before the BCI task. Eleven subjects participated in four sessions each. Eight of the subjects achieved classification rates significantly above chance level. Subjects were able to significantly increase their performance from the first to the second session. Individual patterns of posterior alpha power remained stable throughout the four sessions and did not change with increased performance. We conclude that posterior alpha power can successfully be used as a control signal in brain-computer interfaces. We also discuss several ideas for further improving the setup and propose future research based on solid hypotheses about behavioral consequences of modulating neuronal oscillations by brain computer interfacing.
An adaptive brain actuated system for augmenting rehabilitation
Roset, Scott A.; Gant, Katie; Prasad, Abhishek; Sanchez, Justin C.
2014-01-01
For people living with paralysis, restoration of hand function remains the top priority because it leads to independence and improvement in quality of life. In approaches to restoring hand and arm function, a goal is to better engage voluntary control and counteract the maladaptive brain reorganization that results from non-use. Standard rehabilitation augmented with developments from the study of brain-computer interfaces could provide a combined therapy approach for motor cortex rehabilitation and the alleviation of motor impairments. In this paper, an adaptive brain-computer interface system intended to control a functional electrical stimulation (FES) device is developed as an experimental test bed for augmenting rehabilitation with a brain-computer interface. The system's performance is improved throughout rehabilitation by passive user feedback and reinforcement learning. By continuously adapting to the user's brain activity, similar adaptive systems could be used to support clinical brain-computer interface neurorehabilitation over multiple days. PMID:25565945
Viewpoint 9--molecular structure of aqueous interfaces
NASA Technical Reports Server (NTRS)
Pohorille, A.; Wilson, M. A.
1993-01-01
In this review we summarize recent progress in our understanding of the structure of aqueous interfaces emerging from molecular level computer simulations. It is emphasized that the presence of the interface induces specific structural effects which, in turn, influence a wide variety of phenomena occurring near the phase boundaries. At the liquid-vapor interface, the most probable orientation of a water molecule is such that its dipole moment lies parallel to the interface, one O-H bond points toward the vapor, and the other O-H bond is directed toward the liquid. The orientational distributions are broad and slightly asymmetric, resulting in an excess dipole moment pointing toward the liquid. These structural preferences persist at interfaces between water and nonpolar liquids, indicating that the interactions between the two liquids in contact are weak. It was found that liquid-liquid interfaces are locally sharp but broadened by capillary waves. One consequence of the anisotropic orientations of interfacial water molecules is asymmetric interactions, with respect to the sign of the charge, of ions with the water surface. It was found that even very close to the surface ions retain their hydration shells. New features of aqueous interfaces have been revealed in studies of water-membrane and water-monolayer systems. In particular, water molecules are strongly oriented by the polar head groups of the amphiphilic phase, and they penetrate the hydrophilic head-group region, but not the hydrophobic core. At infinite dilution near interfaces, amphiphilic molecules exhibit behavior different from that in the gas phase or in bulk water. This result sheds new light on the nature of the hydrophobic effect in the interfacial regions. The presence of interfaces was also shown to affect both the equilibrium and dynamic components of the rates of chemical reactions. Applications of continuum models to interfacial problems have been, so far, unsuccessful. This, again, underscores the importance of molecular-level information about interfaces.
Fast attainment of computer cursor control with noninvasively acquired brain signals
NASA Astrophysics Data System (ADS)
Bradberry, Trent J.; Gentili, Rodolphe J.; Contreras-Vidal, José L.
2011-06-01
Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (~20 min) and subject practice (~20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.
Robot Control Through Brain Computer Interface For Patterns Generation
NASA Astrophysics Data System (ADS)
Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.
2011-09-01
A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. This system can allow people with motor disabilities to control external devices through the real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
Light weight portable operator control unit using an Android-enabled mobile phone
NASA Astrophysics Data System (ADS)
Fung, Nicholas
2011-05-01
There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities. However, as more capable robots have been developed and introduced to battlefield environments, the problem of interfacing with human controllers has proven to be challenging. Particularly in military applications, controller requirements can be stringent, ranging from size and power consumption to durability and cost. Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are mobile and have ample computing power. However, laptop PCs are bulky and have greater power requirements. To approach this problem, a lightweight, inexpensive controller was created based on a mobile phone running the Android operating system. It was designed to control an iRobot Packbot through the Army Research Laboratory (ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi communications, a touch screen interface, and the flexibility of the Android operating system, made it a compelling platform. The Android based OCU offers a more portable package and can be easily carried by a soldier along with normal gear requirements. In addition, the one-handed operation of the Android OCU leaves the Soldier an unoccupied hand for greater flexibility. To validate the Android OCU as a capable controller, experimental data were collected evaluating use of the controller and a traditional, tablet PC based OCU. Initial analysis suggests that the Android OCU performed positively in qualitative data collected from participants.
Morphology and Mobility of the Reconstructed Basilar Joint of the Pollicized Index Finger.
Strugarek-Lecoanet, Clotilde; Chevrollier, Jérémie; Pauchard, Nicolas; Blum, Alain; Dap, François; Dautel, Gilles
2016-09-01
To evaluate outcome and function of the reconstructed basilar thumb joint after index finger pollicization in patients presenting congenital thumb deficiency. Plain radiographs and 4-dimensional dynamic volume computed tomography scan were used to evaluate the outcome of 23 pollicizations performed on 14 children between 1996 and 2009. The mean follow-up was 8 years. Patients performed continuous movements of thumb opposition during the imaging studies. Four-dimensional scan images made it possible to visualize mobility within the reconstructed joint. In 14 cases, union occurred in the metacarpal head/metacarpal base interface. In the 9 other cases, there was a nonunion at this interface. The reconstructed joint was mobile in 20 cases, including 3 in which there was also mobility at the site of the nonunion. In 3 cases in our series, mobility was present only at the site of the nonunion, between the base and the head of the second metacarpal. Remodeling and flattening out of the metacarpal head occurred in 16 of 23 cases. The transposed metacarpal head remained spherical in 7 cases. The reconstructed joint adapts, both morphologically and functionally, allowing movement on all 3 spatial planes. Existing mechanical constraints on the reconstructed joint may explain its remodeled appearance. Therapeutic IV. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Freshwater-Brine Mixing Zone Hydrodynamics in Salt Flats (Salar de Atacama)
NASA Astrophysics Data System (ADS)
Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.
2017-12-01
The increasing demand for strategic minerals used in the development of medicines and batteries requires detailed knowledge of the freshwater-brine interface of salt flats to make their exploitation efficient. The interface zone is the result of a physical balance between the recharged and evaporated water. The sharp interface approach assumes the immiscibility of the fluids and thus neglects the mixing between them. As a consequence, for miscible fluids it is more accurate, and often necessary, to use the mixing zone concept, which results from the dynamic equilibrium of flowing freshwater and brine. In this study, we consider two- and three-dimensional scale approaches for the management of the mixing zone. The two-dimensional approach is used to understand the dynamics and the characteristics of the salt flat mixing zone, especially in the Salar de Atacama (Atacama salt flat) case. By making use of this model we analyze and quantify the effects of the aquitards on the mixing zone geometry. However, understanding the complex physical processes occurring in salt flats and managing these environments require the adoption of three-dimensional regional scale numerical models. Models that take into account the effects of variable density represent the best management tool, but they require large computational resources, especially in the three-dimensional case. In order to avoid these computational limitations in the modeling of salt flats and their valuable ecosystems, we propose a three-step methodology, consisting of: (1) collection, validation and interpretation of the hydrogeochemical data, (2) identification and three-dimensional mapping of the mixing zone on the land surface and in depth, and (3) application of a water head correction to the freshwater and mixed water heads in order to compensate for the density variations and to transform them into brine water heads. Finally, an evaluation of the sensitivity of the mixing zone to anthropogenic and climate changes is included.
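Step (3) of the methodology amounts to a density correction of measured heads. The abstract does not give the authors' formula, so the sketch below uses the standard hydrostatic point-water head conversion as an assumption, with an illustrative brine density (values typical of Salar de Atacama literature, not quoted in this abstract).

```python
def freshwater_to_brine_head(h_f, z, rho_f=1000.0, rho_b=1230.0):
    """Convert a freshwater-equivalent head h_f measured at elevation z
    to a brine-equivalent head, assuming hydrostatic pressure at z:
        P = rho_f * g * (h_f - z)   =>   h_b = z + (rho_f/rho_b)*(h_f - z)
    Densities in kg/m3, heads and elevations in m (illustrative values)."""
    return z + (rho_f / rho_b) * (h_f - z)

# A freshwater head of 2302 m measured at 2300 m elevation maps to a
# smaller brine-equivalent head above that point:
print(freshwater_to_brine_head(2302.0, 2300.0))   # ~2301.63 m
```

Applying such a correction point by point is what makes freshwater, mixed-water, and brine observations comparable on a single head map.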
Wolf, M B; Garner, R P
1997-01-01
A model was developed of transient changes in metabolic heat production and core temperature for humans subjected to cold conditions. It was modified to predict thermal effects of the upper parts of the body being sprayed with water from a system designed to reduce the smoke effects of an airplane fire. Temperature changes were computed at 25 body segments in response to water immersion, cold-air exposure, and windy conditions. Inputs to the temperature controller were: (a) temperature change signals from skin segments and (b) an integrated signal of the product of skin and head-core (hypothalamic) temperature changes. The controller stimulated changes in blood flow to skin and muscle and heat production by shivering. Two controller parameters were adjusted to obtain good predictions of temperature and heat-production experimental data in head-out, water-immersion (0-28 degrees C) studies in humans. A water layer on the skin whose thickness decreased transiently due to evaporation was added to describe the effects of the water-spray system. Because the layer evaporated rapidly in a very cold and windy environment, its additional cooling effect over a 60-min exposure period was minimal. The largest additional decrease in rectal temperature due to the water layer was < 1 degree C, which occurred in normal conditions where total decreases were small.
NASA Technical Reports Server (NTRS)
Duncan, K. M.; Harm, D. L.; Crosier, W. G.; Worthington, J. W.
1993-01-01
A unique training device is being developed at the Johnson Space Center Neurosciences Laboratory to help reduce or eliminate Space Motion Sickness (SMS) and spatial orientation disturbances that occur during spaceflight. The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) uses virtual reality technology to simulate some sensory rearrangements experienced by astronauts in microgravity. By exposing a crew member to this novel environment preflight, it is expected that he/she will become partially adapted, and thereby suffer fewer symptoms inflight. The DOME PAT is a 3.7 m spherical dome, within which a 170 by 100 deg field of view computer-generated visual database is projected. The visual database currently in use depicts the interior of a Shuttle spacelab. The trainee uses a six degree-of-freedom, isometric force hand controller to navigate through the virtual environment. Alternatively, the trainee can be 'moved' about within the virtual environment by the instructor, or can look about within the environment by wearing a restraint that controls scene motion in response to head movements. The computer system is comprised of four personal computers that provide the real time control and user interface, and two Silicon Graphics computers that generate the graphical images. The image generator computers use custom algorithms to compensate for spherical image distortion, while maintaining a video update rate of 30 Hz. The DOME PAT is the first such system known to employ virtual reality technology to reduce the untoward effects of the sensory rearrangement associated with exposure to microgravity, and it does so in a very cost-effective manner.
NASA Technical Reports Server (NTRS)
Anderson, T. O. (Inventor)
1976-01-01
An interface logic circuit permitting the transfer of information between two computers having asynchronous clocks is disclosed. The information transfer involves utilization of control signals (including request, return-response, ready) to generate properly timed data strobe signals. Noise problems are avoided because each control signal, upon receipt, is verified by at least two clock pulses at the receiving computer. If control signals are verified, a data strobe pulse is generated to accomplish a data transfer. Once initiated, the data strobe signal is properly completed independently of signal disturbances in the control signal initiating the data strobe signal. Completion of the data strobe signal is announced by automatic turn-off of a return-response control signal.
Leveraging anatomical information to improve transfer learning in brain-computer interfaces
NASA Astrophysics Data System (ADS)
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian K. C.
2015-08-01
Objective. Brain-computer interfaces (BCIs) represent a technology with the potential to rehabilitate a range of traumatic and degenerative nervous system conditions but require a time-consuming training process to calibrate. An area of BCI research known as transfer learning is aimed at accelerating training by recycling previously recorded training data across sessions or subjects. Training data, however, is typically transferred from one electrode configuration to another without taking individual head anatomy or electrode positioning into account, which may underutilize the recycled data. Approach. We explore transfer learning with the use of source imaging, which estimates neural activity in the cortex. Transferring estimates of cortical activity, in contrast to scalp recordings, provides a way to compensate for variability in electrode positioning and head morphologies across subjects and sessions. Main results. Based on simulated and measured electroencephalography activity, we trained a classifier using data transferred exclusively from other subjects and achieved accuracies that were comparable to or surpassed a benchmark classifier (representative of a real-world BCI). Our results indicate that classification improvements depend on the number of trials transferred and the cortical region of interest. Significance. These findings suggest that cortical source-based transfer learning is a principled method to transfer data that improves BCI classification performance and provides a path to reduce BCI calibration time.
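The core of the transfer-learning experiment above is training a classifier exclusively on other subjects' trials and testing it on a new subject. The sketch below reproduces that cross-subject protocol with synthetic stand-ins for source-space feature vectors; the forward/inverse modelling step that maps EEG to cortical source estimates (the paper's key ingredient) is omitted here, and the feature dimensions and classifier choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def subject_trials(n=100, shift=0.0, dim=20):
    """Synthetic two-class trials with a small subject-specific shift
    standing in for inter-subject variability."""
    X = rng.normal(size=(n, dim))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :5] += 1.0 + shift        # class-dependent pattern
    return X, y

# Pool training data from five "other" subjects only.
X_train, y_train = [], []
for s in range(5):
    X, y = subject_trials(shift=0.1 * s)
    X_train.append(X), y_train.append(y)

clf = LogisticRegression(max_iter=1000)
clf.fit(np.vstack(X_train), np.concatenate(y_train))

# Test on an unseen subject: zero calibration trials from this user.
X_new, y_new = subject_trials(shift=0.3)
print("cross-subject transfer accuracy:", clf.score(X_new, y_new))
```

The paper's argument is that doing this pooling in source space, rather than on raw channel vectors, aligns the subjects' data well enough for the pooled classifier to work.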
NASA Technical Reports Server (NTRS)
Kriegler, F. J.
1974-01-01
The MIDAS System is described as a third-generation fast multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turnaround time and significant gains in throughput. The hardware and software are described. The system contains a mini-computer to control the various high-speed processing elements in the data path, and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 200,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation.
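The decision rule MIDAS implemented in hardware is multivariate-Gaussian maximum-likelihood classification: fit a mean and covariance per class, then assign each pixel vector to the class with the highest Gaussian log-likelihood. A software sketch with synthetic two-class spectral data (class shapes and band count are illustrative):

```python
import numpy as np

def train_gaussian_ml(X, y):
    """Per-class mean and covariance, as in a multivariate-Gaussian
    maximum-likelihood pixel classifier."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), np.cov(Xc, rowvar=False))
    return params

def classify(params, X):
    """Assign each pixel vector to the class with the highest Gaussian
    log-likelihood (the rule the MIDAS classifier hardware evaluates)."""
    scores = []
    for c, (mu, cov) in sorted(params.items()):
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = X - mu
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet))
    return np.array(sorted(params)).take(np.argmax(scores, axis=0))

# Two synthetic spectral classes in 4 bands:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
y = np.repeat([0, 1], 200)
print((classify(train_gaussian_ml(X, y), X) == y).mean())
```

The per-pixel cost is fixed once the class parameters are computed, which is what made a 200,000 pixels/sec hardware pipeline feasible.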
A USB 2.0 computer interface for the UCO/Lick CCD cameras
NASA Astrophysics Data System (ADS)
Wei, Mingzhi; Stover, Richard J.
2004-09-01
The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.
Assessing the feasibility of online SSVEP decoding in human walking using a consumer EEG headset.
Lin, Yuan-Pin; Wang, Yijun; Jung, Tzyy-Ping
2014-08-09
Bridging the gap between laboratory brain-computer interface (BCI) demonstrations and real-life applications has gained increasing attention in translational neuroscience. An urgent need is to explore the feasibility of using a low-cost, easy-to-use electroencephalogram (EEG) headset for monitoring individuals' EEG signals in their natural head/body positions and movements. This study aimed to assess the feasibility of using a consumer-level EEG headset to realize an online steady-state visual-evoked potential (SSVEP)-based BCI during human walking. The study adopted a 14-channel Emotiv EEG headset to implement a four-target online SSVEP decoding system, and included treadmill walking at speeds of 0.45, 0.89, and 1.34 meters per second (m/s) as the walking conditions. Seventeen participants were instructed to perform the online BCI tasks while standing or walking on the treadmill. To maintain a constant viewing distance to the visual targets, participants held the hand-grip of the treadmill during the experiment. Along with online BCI performance, the concurrent SSVEP signals were recorded for offline assessment. Despite walking-related attenuation of SSVEPs, the online BCI obtained an information transfer rate (ITR) over 12 bits/min during slow walking (below 0.89 m/s). SSVEP-based BCI systems are thus deployable to users walking on a treadmill in a way that mimics natural walking, rather than only in highly-controlled laboratory settings. This study considerably promotes the use of consumer-level EEG headsets toward real-life BCI applications.
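The abstract does not state the decoding algorithm, but canonical correlation analysis (CCA) against sine/cosine reference templates is the standard approach for frequency-coded SSVEP and is sketched here as an assumption; the sampling rate and flicker frequencies are likewise illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 128                 # consumer-headset-class sampling rate, Hz (assumed)
FREQS = [9, 10, 11, 12]  # illustrative target flicker frequencies, Hz

def ssvep_reference(freq, n_samples, fs=FS, harmonics=2):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, harmonics + 1) for f in (np.sin, np.cos)])

def classify_ssvep(eeg):
    """Pick the target whose reference templates are most correlated
    (first canonical correlation) with the multichannel EEG segment."""
    rhos = []
    for f in FREQS:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, ssvep_reference(f, len(eeg)))
        rhos.append(np.corrcoef(u.ravel(), v.ravel())[0, 1])
    return FREQS[int(np.argmax(rhos))], rhos

# Synthetic 2-s, 8-channel segment containing a 10 Hz response:
t = np.arange(2 * FS) / FS
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + \
      np.random.default_rng(0).normal(size=(2 * FS, 8))
print(classify_ssvep(eeg)[0])   # expected: 10
```

Because CCA pools information across channels, it degrades gracefully as walking artifacts attenuate the SSVEP, consistent with the above-chance performance the study reports during locomotion.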
Research developing closed loop roll control for magnetic balance systems
NASA Technical Reports Server (NTRS)
Covert, E. E.; Haldeman, C. W.
1981-01-01
Computer inputs were interfaced to the magnetic balance outputs to provide computer position control and data acquisition. The use of parameter identification as a means of determining dynamic characteristics was investigated. The thyratron and motor-generator power supplies for the pitch and yaw degrees of freedom were repaired. Topics covered include: choice of a method for handling dynamic system data; applications to the magnetic balance; the computer interface; and wind tunnel tests, results, and error analysis.
Biomechanical responses of a pig head under blast loading: a computational simulation.
Zhu, Feng; Skelton, Paul; Chou, Cliff C; Mao, Haojie; Yang, King H; King, Albert I
2013-03-01
A series of computational studies were performed to investigate the biomechanical responses of the pig head under a specific shock tube environment. A finite element model of the head of a 50-kg Yorkshire pig was developed with sufficient details, based on the Lagrangian formulation, and a shock tube model was developed using the multimaterial arbitrary Lagrangian-Eulerian (MMALE) approach. These two models were integrated and a fluid/solid coupling algorithm was used to simulate the interaction of the shock wave with the pig's head. The finite element model-predicted incident and intracranial pressure traces were in reasonable agreement with those obtained experimentally. Using the verified numerical model of the shock tube and pig head, further investigations were carried out to study the spatial and temporal distributions of pressure, shear stress, and principal strain within the head. Pressure enhancement was found in the skull, which is believed to be caused by shock wave reflection at the interface of the materials with distinct wave impedances. Brain tissue has a shock attenuation effect and larger pressures were observed in the frontal and occipital regions, suggesting a greater possibility of coup and contrecoup contusion. Shear stresses in the brain and deflection in the skull remained at a low level. Higher principal strains were observed in the brain near the foramen magnum, suggesting that there is a greater chance of cellular or vascular injuries in the brainstem region. Copyright © 2012 John Wiley & Sons, Ltd.
Human factors with nonhumans - Factors that affect computer-task performance
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.
Asai, Yoshiyuki; Tateyama, Shota; Nomura, Taishin
2013-01-01
It has been considered that the brain stabilizes unstable body dynamics by regulating co-activation levels of antagonist muscles. Here we critically reexamined this established theory of impedance control in a postural balancing task using a novel EMG-based human-computer interface, in which subjects were asked to balance a virtual inverted pendulum using visual feedback information on the pendulum's position. The pendulum was actuated by a pair of antagonist joint torques determined in real-time by activations of the corresponding pair of antagonist ankle muscles of subjects standing upright. This motor task poses a frustrated environment: a large feedback time delay in the sensorimotor loop, as a source of instability, might favor adopting the non-reactive, preprogrammed impedance control, but the ankle muscles are relatively hard to co-activate, which hinders subjects from adopting the impedance control. This study aimed at discovering how experimental subjects resolved this frustrated environment through motor learning. One third of subjects adapted to the balancing task in the manner of impedance-like control. It was remarkable, however, that the majority of subjects did not adopt the impedance control. Instead, they acquired a smart and energetically efficient strategy, in which two muscles were inactivated simultaneously at a sequence of optimal timings, leading to the intermittent appearance of periods of time during which the pendulum was not actively actuated. Characterizations of muscle inactivations and the pendulum's sway showed that the strategy adopted by those subjects was a type of intermittent control that utilizes the stable manifold of the saddle-type unstable upright equilibrium that appeared in the state space of the pendulum when the active actuation was turned off. PMID:23717398
An EMG-based robot control scheme robust to time-varying EMG signal features.
Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J
2010-05-01
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.
Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.
2016-01-01
We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms segments free from electrooculogram contamination (200-500 ms relative to fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between intended and spontaneous fixations. If this direction of BCI development is successful, such a multimodal interface may improve the fluency of interaction and could become the basis for a new input device for paralyzed and healthy users, the EBCI “Wish Mouse.” PMID:27917105
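The shrinkage LDA classifier mentioned above is a standard tool for small-sample EEG problems; `scikit-learn` exposes it via the `lsqr` solver with Ledoit-Wolf shrinkage. A minimal sketch on synthetic data, where the feature layout (13 channel amplitudes in the 200-500 ms window) follows the abstract but the class distributions are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 13            # 13 channels, one amplitude each

# Intentional fixations carry the slow negative wave; spontaneous do not.
X_intent = rng.normal(-1.0, 2.0, (n_trials, n_features))
X_spont = rng.normal(0.0, 2.0, (n_trials, n_features))
X = np.vstack([X_intent, X_spont])
y = np.repeat([1, 0], n_trials)

# 'lsqr' + shrinkage='auto' gives the regularized (shrinkage) LDA that
# copes with limited trial counts relative to the feature dimension.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
print("training accuracy:", lda.score(X, y))
```

In a real EBCI the decision threshold would be tuned toward high specificity, as in the reported 0.90 specificity / 0.38 sensitivity operating point, since false activations are the costly error for gaze control.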
CPP magnetoresistance of magnetic multilayers: A critical review
NASA Astrophysics Data System (ADS)
Bass, Jack
2016-06-01
We present a comprehensive, critical review of data and analysis of Giant (G) Magnetoresistance (MR) with Current-flow Perpendicular-to-the-layer-Planes (CPP-MR) of magnetic multilayers [F/N]n (n=number of repeats) composed of alternating nanoscale layers of ferromagnetic (F) and non-magnetic (N) metals, or of spin-valves that allow control of anti-parallel (AP) and parallel (P) orientations of the magnetic moments of adjacent F-layers. GMR, a large change in resistance when an applied magnetic field changes the moment ordering of adjacent F-layers from AP to P, was discovered in 1988 in the geometry with Current flow in the layer-Planes (CIP). The CPP-MR has two advantages over the CIP-MR: (1) relatively simple two-current series-resistor (2CSR) and more general Valet-Fert (VF) models allow more direct access to the underlying physics; and (2) it is usually larger, which should be advantageous for devices. When the first CPP-MR data were published in 1991, it was not clear whether electronic transport in GMR multilayers is completely diffusive or at least partly ballistic. It was not known whether the properties of layers and interfaces would vary with layer thickness or number. It was not known whether the CPP-MR would be dominated by scattering within the F-metals or at the F/N interfaces. Nothing was known about: (1) spin-flipping within F-metals, characterized by a spin-diffusion length, lsfF; (2) interface specific resistances (AR=area A times resistance R) for N1/N2 interfaces; (3) interface specific resistances and interface spin-dependent scattering asymmetry at F/N and F1/F2 interfaces; and (4) spin-flipping at F/N, F1/F2 and N1/N2 interfaces. Knowledge of spin-dependent scattering asymmetries in F-metals and F-alloys, and of spin-flipping in N-metals and N-alloys, was limited. Since 1991, CPP-MR measurements have quantified the scattering and spin-flipping parameters that determine GMR for a wide range of F- and N-metals and alloys and of F/N pairs. This review is designed to provide a history of how knowledge of CPP-MR parameters grew, to give credit for discoveries, to explain how combining theory and experiment has enabled extraction of quantitative information about these parameters, but also to make clear that progress was not always direct and to point out where disagreements still exist. To limit its length, the review considers only collinear orientations of the moments of adjacent F-layers. To aid readers looking for specific information, we have provided an extensive table of contents and a detailed summary. Together, these should help locate over 100 figures plus 17 tables that collect values of individual parameters. In 1997, CIP-MR replaced anisotropic MR (AMR) as the sensor in read heads of computer hard drives. In principle, the usually larger CPP-MR was a contender for the next generation read head sensor. But in 2003, CIP-MR was replaced by the even larger Tunneling MR (TMR), which has remained the read-head sensor ever since. However, as memory bits shrink to where the relatively large specific resistance AR of TMR gives too much noise and too large an R to impedance match as a read-head sensor, the door is again opened for CPP-MR. We will review progress in finding techniques and F-alloys and F/N pairs to enhance the CPP-MR, and will describe its present capabilities.
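The two-current series-resistor (2CSR) model discussed throughout this review treats each spin channel as a series chain of bulk and interface specific resistances, with the two channels in parallel. The sketch below evaluates an F/N/F trilayer in parallel (P) and antiparallel (AP) states; the parameter values are order-of-magnitude illustrations of Py/Cu-like numbers, not fitted constants from the review, and finite spin-diffusion effects (the Valet-Fert corrections) are deliberately omitted.

```python
def trilayer_AR(beta=0.76, gamma=0.70, rhoF=2.0e-7, rhoN=6.0e-9,
                tF=6e-9, tN=4e-9, ARi=0.5e-15):
    """2CSR specific resistance of an F/N/F trilayer.
    rho* in Ohm.m, thicknesses in m, interface AR* in Ohm.m^2;
    beta, gamma = bulk and interface scattering asymmetries."""
    def channel(s1, s2):
        # Series sum for one spin channel: two F layers, one N layer,
        # two F/N interfaces (the factors of 2 follow the 2CSR convention;
        # s = +1 when the channel is majority in that F layer).
        return (2 * rhoF * tF * (1 - s1 * beta)
                + 2 * rhoF * tF * (1 - s2 * beta)
                + 2 * rhoN * tN
                + 2 * ARi * (1 - s1 * gamma)
                + 2 * ARi * (1 - s2 * gamma))

    def total(s1, s2):
        up, dn = channel(s1, s2), channel(-s1, -s2)
        return up * dn / (up + dn)      # two spin channels in parallel

    AR_P, AR_AP = total(+1, +1), total(+1, -1)
    return AR_P, AR_AP, (AR_AP - AR_P) / AR_P

AR_P, AR_AP, mr = trilayer_AR()
print(f"AR(P)={AR_P:.3e} Ohm.m^2  AR(AP)={AR_AP:.3e}  CPP-MR={mr:.0%}")
```

In the P state the majority channel short-circuits the stack, so AR(P) < AR(AP) and the computed CPP-MR is large; real multilayers fall below this 2CSR ideal once spin-flipping within layers and at interfaces, quantified by the parameters this review tabulates, is included.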
CSI computer system/remote interface unit acceptance test results
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.
1992-01-01
The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open loop excitation, closed loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground based systems in performing real-time control-structure experiments.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Assessment of mechanical properties of human head tissues for trauma modelling.
Lozano-Mínguez, Estívaliz; Palomar, Marta; Infante-García, Diego; Rupérez, María José; Giner, Eugenio
2018-05-01
Many discrepancies are found in the literature regarding the damage and constitutive models for head tissues as well as the values of the constants involved in the constitutive equations. Their proper definition is required for consistent numerical model performance when predicting human head behaviour, and hence skull fracture and brain damage. The objective of this research is to perform a critical review of constitutive models and damage indicators describing human head tissue response under impact loading. A 3D finite element human head model has been generated by using computed tomography images, which has been validated through the comparison to experimental data in the literature. The threshold values of the skull and the scalp that lead to fracture have been analysed. We conclude that (1) compact bone properties are critical in skull fracture, (2) the elastic constants of the cerebrospinal fluid affect the intracranial pressure distribution, and (3) the consideration of brain tissue as a nearly incompressible solid with a high (but not complete) water content offers pressure responses consistent with the experimental data. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Scarella, Gilles; Clatz, Olivier; Lanteri, Stéphane; Beaume, Grégory; Oudot, Steve; Pons, Jean-Philippe; Piperno, Serge; Joly, Patrick; Wiart, Joe
2006-06-01
The ever-rising diffusion of cellular phones has brought about an increased concern for the possible consequences of electromagnetic radiation on human health. Possible thermal effects have been investigated, via experimentation or simulation, by several research projects in the last decade. Concerning numerical modeling, the power absorption in a user's head is generally computed using discretized models built from clinical MRI data. The vast majority of such numerical studies have been conducted using Finite Difference Time Domain methods, although their accuracy is strongly limited by tissue heterogeneity, the poor definition of the detailed structures of head tissues (staircasing effects), etc. In order to propose numerical modeling using Finite Element or Discontinuous Galerkin Time Domain methods, reliable automated tools for the unstructured discretization of human heads are also needed. Results presented in this article aim at filling the gap between human head MRI images and the accurate numerical modeling of wave propagation in biological tissues and its thermal effects. To cite this article: G. Scarella et al., C. R. Physique 7 (2006).
NASA Technical Reports Server (NTRS)
Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)
1990-01-01
This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers while bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, the location of the second portion being part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
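The split-token scheme lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of the data structure the abstract describes: a moving first portion passed from computer to computer that records where the resident second portion (the data) lives. All names here are ours, not the patent's.

```python
from dataclasses import dataclass

# Hypothetical sketch (our names, not the patent's) of a "split token":
# the moving portion travels between computers and records where the
# resident portion -- the data -- lives.

@dataclass
class ResidentPortion:
    data: list            # operands, kept in one computer's memory

@dataclass
class MovingPortion:
    function: str         # name of the function to execute
    home_node: int        # computer holding the resident portion
    address: int          # location of the resident portion on that computer

# Toy stand-ins for per-node memory and the point-to-point "meshwork" link.
MEMORY = {0: {42: ResidentPortion([1, 2, 3])}, 1: {}}
FUNCTIONS = {"sum": sum}

def execute(token: MovingPortion, local_node: int):
    if token.home_node == local_node:
        operands = MEMORY[local_node][token.address].data       # already local
    else:
        operands = MEMORY[token.home_node][token.address].data  # fetched via meshwork link
    return FUNCTIONS[token.function](operands)

print(execute(MovingPortion("sum", home_node=0, address=42), local_node=1))  # -> 6
```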
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, so no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common sperm-head description and classification techniques. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is well suited to the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification rate: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
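For readers who want to reproduce a baseline of this kind, the sketch below pairs one of the shape descriptors named in the abstract (Hu moments) with one of the classifiers (an SVM), using OpenCV and scikit-learn. It is a generic illustration under our own assumptions (grayscale head crops, integer class labels), not the authors' pipeline.

```python
import cv2
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hu_features(image: np.ndarray) -> np.ndarray:
    """Seven Hu moment invariants of a grayscale sperm-head crop,
    log-scaled as is conventional to tame their dynamic range."""
    hu = cv2.HuMoments(cv2.moments(image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# images: list of grayscale crops; labels: 0..4 for the five classes
# (normal, tapered, pyriform, small, amorphous) -- placeholders here.
def baseline_accuracy(images, labels) -> float:
    X = np.array([hu_features(im) for im in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```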
Sensing Passive Eye Response to Impact Induced Head Acceleration Using MEMS IMUs.
Meng, Yuan; Bottenfield, Brent; Bolding, Mark; Liu, Lei; Adams, Mark L
2018-02-01
The eye may act as a surrogate for the brain in response to head acceleration during an impact. In this paper, passive eye movements in a dynamic system are sensed by microelectromechanical systems (MEMS) inertial measurement units (IMUs). The technique is validated using a three-dimensional printed, scaled human skull model and on human volunteers by performing drop-and-impact experiments with ribbon-style flexible printed circuit board IMUs inserted in the eyes and reference IMUs on the heads. Data are captured by a microcontroller unit and processed using data fusion. Displacements are thus estimated and match the measured parameters. Relative accelerations and displacements of the eye with respect to the head are computed, indicating the influence of concussion-causing impacts.
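The displacement estimate the abstract mentions can be illustrated with the usual IMU bookkeeping: remove the gravity component, then integrate acceleration twice. A minimal numpy sketch, assuming the accelerations have already been rotated into a common world frame (the fusion step the paper performs):

```python
import numpy as np

def displacement(acc: np.ndarray, fs: float) -> np.ndarray:
    """Double-integrate world-frame acceleration (N x 3, m/s^2) sampled
    at fs Hz into displacement. Gravity is removed by subtracting the
    pre-impact mean -- a crude zero-acceleration baseline (assumes the
    first 50 samples precede the impact)."""
    a = acc - acc[:50].mean(axis=0)      # baseline/gravity removal
    dt = 1.0 / fs
    v = np.cumsum(a, axis=0) * dt        # velocity
    return np.cumsum(v, axis=0) * dt     # displacement

# Relative eye-to-head motion: integrate the difference of the two sensors.
# rel = displacement(acc_eye - acc_head, fs)
```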
ERIC Educational Resources Information Center
Hoffman, Daniel L.
2013-01-01
The purpose of the study is to better understand the role of physicality, interactivity, and interface effects in learning with digital content. Drawing on work in cognitive science, human-computer interaction, and multimedia learning, the study argues that interfaces that promote physical interaction can provide "conceptual leverage"…
Corti, Kevin; Gillespie, Alex
2015-01-01
We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066
Development of HEATHER for cochlear implant stimulation using a new modeling workflow.
Tran, Phillip; Sue, Andrian; Wong, Paul; Li, Qing; Carter, Paul
2015-02-01
The current conduction pathways resulting from monopolar stimulation of the cochlear implant were studied by developing a human electroanatomical total head reconstruction (namely, HEATHER). HEATHER was created from serially sectioned images of the female Visible Human Project dataset to encompass a total of 12 different tissues, and included computer-aided design geometries of the cochlear implant. Since existing methods were unable to generate the required complexity for HEATHER, a new modeling workflow was proposed. The results of the finite-element analysis agree with the literature, showing that the injected current exits the cochlea via the modiolus (14%), the basal end of the cochlea (22%), and through the cochlear walls (64%). It was also found that, once leaving the cochlea, the current travels to the implant body via the cranial cavity or scalp. The modeling workflow proved to be robust and flexible, allowing for meshes to be generated with substantial user control. Furthermore, the workflow could easily be employed to create realistic anatomical models of the human head for different bioelectric applications, such as deep brain stimulation, electroencephalography, and other biophysical phenomena.
Eye-head coordination during free exploration in human and cat.
Einhäuser, Wolfgang; Moeller, Gudrun U; Schumann, Frank; Conradt, Jörg; Vockeroth, Johannes; Bartl, Klaus; Schneider, Erich; König, Peter
2009-05-01
Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted as a consequence of the substantially different timescales of head movements, with cats' head movements showing roughly 5-fold faster dynamics than humans'. For both species, models and laboratory experiments therefore need to account for this rich input dynamic to obtain validity for ecologically realistic settings.
Interface Provides Standard-Bus Communication
NASA Technical Reports Server (NTRS)
Culliton, William G.
1995-01-01
Microprocessor-controlled interface (IEEE-488/LVABI) incorporates service-request and direct-memory-access features. It is a circuit card enabling digital communication between a system called the "laser velocimeter auto-covariance buffer interface" (LVABI) and a compatible personal computer via a general-purpose interface bus (GPIB) conforming to Institute of Electrical and Electronics Engineers (IEEE) Standard 488. The interface serves as a second interface enabling the first to exploit the advantages of GPIB via utility software written specifically for GPIB. Advantages include compatibility with multitasking and support of communication among multiple computers. The basic concept is also applicable to interfaces for circuits other than the LVABI, for unidirectional or bidirectional handling of parallel data up to 16 bits wide.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Anthropomorphic Robot Design and User Interaction Associated with Motion
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2016-01-01
Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction. Their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the users' direct manual control over robot limbs and body positions, 2) improve users' ability to detect anomalous robot behavior which could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter
2017-01-01
Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted-for mental processes can interfere with the signals used for classification and thereby introduce changes in signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose an approach that could describe different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics whose sources lie outside the primary motor areas but affect the signal of interest originating there, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776
1993-11-01
way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most... devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode... across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory
NASA Astrophysics Data System (ADS)
Vijayan, Rohan; Conley, Rebekah H.; Thompson, Reid C.; Clements, Logan W.; Miga, Michael I.
2016-03-01
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects, typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict brain shift based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in executing this pipeline has been acquiring the surgical variables from the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine the expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs to the biomechanical models of the preoperative computing phase of the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of the patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.
Perception and Haptic Rendering of Friction Moments.
Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T
2011-01-01
This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independent of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the human's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as perceptions during object manipulation in a virtual reality environment, were evaluated experimentally.
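As a point of reference for the quantity being rendered (this is a textbook idealization, not the paper's friction-moment-arc method, whose details the abstract does not give), the friction moment about the normal axis of a circular fingertip contact of radius $a$ under uniform pressure $p$ and normal force $N$ integrates to:

```latex
M_z=\int_0^{a}\mu\,p\,r\,(2\pi r)\,dr
   =\frac{2\pi\mu p a^{3}}{3}
   =\frac{2}{3}\,\mu N a,
\qquad p=\frac{N}{\pi a^{2}},
```

so the rendered moment scales with both grip force and contact radius.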
NASA Astrophysics Data System (ADS)
Wei, Yanni; Luo, Yongguang; Qu, Hongtao; Zou, Juntao; Liang, Shuhua
2017-12-01
In this paper, the microstructure evolution and failure of the aluminum-copper interface of cathode conductive heads during service were studied. The interface morphologies, compositions, conductivity and mechanical properties were investigated and analyzed. Obvious corrosion was found on the surface of the contact interface, and it was more prevalent on the Al matrix. Cracking increased sharply in the locally metallurgically bonded areas of the interface, while the compound volume showed no significant change. Phase transformation occurred at the interface during use, as shown by elemental composition analysis and x-ray diffraction patterns. The microhardness near the interface increased accordingly. A marked decrease in electrical conductivity appeared at the Al/Cu interface of the cathode conductive head after a specific interval of use. Therefore, the deterioration of the microstructures and corrosion are the primary factors that affect the electrical conductivity and effective bonding, and they will lead to eventual failure.
Alonso-Valerdi, Luz María
2016-01-01
A brain-computer interface (BCI) aims to establish communication between the human brain and a computing system so as to enable interaction between an individual and his or her environment without using the brain's output pathways. Individuals control a BCI system by modulating their brain signals through mental tasks (e.g., motor imagery or mental calculation) or sensory stimulation (e.g., auditory, visual, or tactile). As users modulate their brain signals at different frequencies and at different levels, the appropriate characterization of those signals is necessary. The modulation of brain signals through mental tasks is furthermore a skill that requires training; unfortunately, not all users acquire such skill. A practical solution to this problem is to assess the user's probability of controlling a BCI system. Another possible solution is to set the bandwidth of the brain oscillations, which is highly sensitive to the user's age, sex and anatomy. With this in mind, NeuroIndex, a Python executable script, estimates a neurophysiological prediction index and the individual alpha frequency (IAF) of the user in question. These two parameters are useful for characterizing the user's EEG signals and deciding how to go through the complex process of adapting the human brain and the computing system on the basis of previously proposed methods. NeuroIndex is not only an implementation of those methods; it also combines them so that they complement each other, and it provides an alternative way to obtain the prediction parameter. However, an important limitation of this application is its dependency on the IAF value, and some results should be interpreted with caution. The script, along with some electroencephalographic datasets, is available on a GitHub repository in order to corroborate the functionality and usability of this application. PMID:27445783
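The IAF the script estimates is commonly taken as the peak of the EEG power spectrum in the alpha range. A minimal sketch of that convention (our simplification, not necessarily NeuroIndex's exact method), using scipy:

```python
import numpy as np
from scipy.signal import welch

def individual_alpha_frequency(eeg: np.ndarray, fs: float,
                               band=(7.0, 13.0)) -> float:
    """Return the frequency of maximal power within the alpha band
    for a single-channel EEG trace sampled at fs Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # ~0.25 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(psd[mask])])
```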
NASA Astrophysics Data System (ADS)
Naqvi, Rizwan Ali; Park, Kang Ryoung
2016-06-01
Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
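The abstract does not spell out the fuzzy inputs, so the sketch below is purely illustrative: two plausible cues (dwell time and gaze dispersion) pass through triangular memberships and a min-based rule to yield an "intentional" score. The inputs, membership ranges, and rules are all our assumptions, not the paper's algorithm.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def intentional_fixation_score(dwell_ms: float, dispersion_deg: float) -> float:
    long_dwell = tri(dwell_ms, 300, 800, 2000)        # hypothetical ranges
    steady_gaze = tri(dispersion_deg, -0.1, 0.2, 1.0)
    short_dwell = tri(dwell_ms, 0, 100, 400)
    # Rule 1: long dwell AND steady gaze -> intentional (min = fuzzy AND)
    # Rule 2: short dwell -> unintentional
    intentional = min(long_dwell, steady_gaze)
    unintentional = short_dwell
    return intentional / (intentional + unintentional + 1e-9)

print(intentional_fixation_score(900, 0.25))  # high score -> treat as a selection
```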
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laros, James H.; Grant, Ryan; Levenhagen, Michael J.
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower-level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, these measurement and control features need a portable interface, which would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding
2017-07-01
Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for the TCRE based on a realistic-geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense patches match the small structural components of the TCRE; elsewhere the mesh was kept coarse to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and assessed possible numerical errors against a low-density model. Finally, with this accuracy achieved, we present the computed forward lead field of the SL for the TCRE for the first time in a realistic-geometry head model and demonstrate that it has better spatial resolution than the SL computed from classic EEG recordings.
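To see why concentric rings yield a surface Laplacian estimate, average the potential over rings of radius $r$ and $2r$ around the central disc and expand in a Taylor series; the fourth-order terms cancel in the familiar nine-point-style combination (a standard finite-difference argument, not necessarily the exact weighting used in this study):

```latex
\bar v(r) \approx v_0 + \frac{r^2}{4}\,\nabla^2 v + \frac{r^4}{64}\,\nabla^4 v
\quad\Longrightarrow\quad
\nabla^2 v \approx \frac{16\,(\bar v_m - v_0) - (\bar v_o - v_0)}{3r^2},
```

where $v_0$, $\bar v_m$, and $\bar v_o$ are the disc, middle-ring (radius $r$), and outer-ring (radius $2r$) potentials.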
Effects of External Loads on Human Head Movement Control Systems
NASA Technical Reports Server (NTRS)
Nam, M. H.; Choi, O. M.
1984-01-01
The central and reflexive control strategies underlying movements were elucidated by studying the effects of external loads on human head movement control systems. Some experimental results are presented on the dynamic changes caused by the addition of an aviation helmet (SPH4) and lead weights (6 kg). Intended time-optimal movements, their dynamics, and the electromyographic activity of neck muscles were measured in normal movements and in movements made with external weights applied to the head. It was observed that, when the external loads were added, the subject went through complex adaptation processes, and the head movement trajectory and its derivatives reached steady conditions only after a transient adaptation period. The steady adapted state was reached after 15 to 20 seconds (i.e., 5 to 6 movements).
Rothschild, Ryan Mark
2010-01-01
The main focus of this review is to provide a holistic amalgamated overview of the most recent human in vivo techniques for implementing brain–computer interfaces (BCIs), bidirectional interfaces, and neuroprosthetics. Neuroengineering is providing new methods for tackling current difficulties; however, neuroprosthetics have been studied for decades. Recent progress is permitting the design of better systems with higher accuracies, repeatability, and system robustness. Bidirectional interfaces integrate recording and the relaying of information from and to the brain for the development of BCIs. The concepts of non-invasive and invasive recording of brain activity are introduced. This includes classical and innovative techniques like electroencephalography and near-infrared spectroscopy. Then the problem of gliosis and solutions for (semi-)permanent implant biocompatibility, such as innovative implant coatings, materials, and shapes, are discussed. Implant power and the transmission of their data through implanted pulse generators and wireless telemetry are taken into account. How sensation can be relayed back to the brain to increase integration of the neuroengineered systems with the body, by methods such as micro-stimulation and transcranial magnetic stimulation, is then addressed. The neuroprosthetic section discusses some of the various types and how they operate. Visual prosthetics are discussed, and the three types, which depend on implant location, are examined. Auditory prosthetics, being cochlear or cortical, are then addressed. Replacement hand and limb prosthetics are then considered. These are followed by sections concentrating on the control of wheelchairs, computers and robotics directly from brain activity as recorded by non-invasive and invasive techniques. PMID:21060801
NASA Astrophysics Data System (ADS)
Milanovic, Veljko; Kasturi, Abhishek; Hachtel, Volker
2015-02-01
A high brightness Head-Up Display (HUD) module was demonstrated with a fast, dual-axis MEMS mirror that displays vector images and text, utilizing its ~8kHz bandwidth on both axes. Two methodologies were evaluated: in one, the mirror steers a laser at wide angles of <48° on transparent multi-color fluorescent emissive film and displays content directly on the windshield, and in the other the mirror displays content on reflective multi-color emissive phosphor plates reflected off the windshield to create a virtual image for the driver. The display module is compact, consisting of a single laser diode, off-the-shelf lenses and a MEMS mirror in combination with a MEMS controller to enable precise movement of the mirror's X- and Y-axis. The MEMS controller offers both USB and wireless streaming capability and we utilize a library of functions on a host computer for creating content and controlling the mirror. Integration with smart phone applications is demonstrated, utilizing the mobile device both for content generation based on various messages or data, and for content streaming to the MEMS controller via Bluetooth interface. The display unit is highly resistant to vibrations and shock, and requires only ~1.5W to operate, even with content readable in sunlit outdoor conditions. The low power requirement is in part due to a vector graphics approach, allowing the efficient use of laser power, and also due to the use of a single, relatively high efficiency laser and simple optics.
Kim, Dong-Goo; Lim, Sung Eun; Kim, Dong-A; Hwang, Sung Il; Yim, You-lim; Park, Jeong Mi
2013-01-01
In order to determine the most suitable computer interfaces for patients with high cervical cord injury, we report three cases of applications of special input devices. The first was a 49-year-old patient with neurological level of injury (NLI) C4, American Spinal Injury Association Impairment Scale (ASIA)-A. He could move the cursor by using a webcam-based Camera Mouse, while clicking could only be performed by pronation of the forearm on the modified Micro Light Switch. The second case was a 41-year-old patient with NLI C3, ASIA-A. The SmartNav 4AT, which responds to head movements, provided stable performance in clicking and dragging. The third was a 13-year-old patient with NLI C1, ASIA-B, for whom the IntegraMouse enabled clicking and dragging with fine movements of the lips. Selecting the appropriate interface device for patients with high cervical cord injury could be considered an important part of rehabilitation. We expect the standard proposed in this study will be helpful. PMID:23869346
Novel 3-D Computer Model Can Help Predict Pathogens’ Roles in Cancer | Poster
To understand how bacterial and viral infections contribute to human cancers, four NCI at Frederick scientists turned not to the lab bench, but to a computer. The team has created the world’s first—and currently, only—3-D computational approach for studying interactions between pathogen proteins and human proteins based on a molecular adaptation known as interface mimicry.
Standard interface: Twin-coaxial converter
NASA Technical Reports Server (NTRS)
Lushbaugh, W. A.
1976-01-01
The network operations control center standard interface has been adopted as a standard computer interface for all future minicomputer based subsystem development for the Deep Space Network. Discussed is an intercomputer communications link using a pair of coaxial cables. This unit is capable of transmitting and receiving digital information at distances up to 600 m with complete ground isolation between the communicating devices. A converter is described that allows a computer equipped with the standard interface to use the twin coaxial link.
Collaborative Brain-Computer Interface for Aiding Decision-Making
Poli, Riccardo; Valeriani, Davide; Cinel, Caterina
2014-01-01
We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules which weight the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making. PMID:25072739
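A confidence-weighted group rule of the kind compared here can be written in a few lines. The weighting function below (inverse response time blended with a neural confidence proxy) is our illustrative choice; the paper derives its weights from its own neuro-behavioural feature.

```python
import numpy as np

def group_decision(decisions, response_times, neural_conf):
    """Combine individual binary decisions (+1/-1) into a group decision.

    decisions      : array of +1/-1 votes, one per observer
    response_times : seconds; faster responses get more weight (assumption)
    neural_conf    : [0, 1] confidence proxy from the EEG feature (assumption)
    """
    d = np.asarray(decisions, dtype=float)
    w = np.asarray(neural_conf) / np.asarray(response_times)  # illustrative weighting
    return int(np.sign(np.sum(w * d)))

def majority(decisions):
    return int(np.sign(np.sum(decisions)))                    # baseline rule

votes = [+1, -1, +1, +1, -1]
print(majority(votes),
      group_decision(votes, [0.4, 0.3, 0.9, 0.5, 1.2],
                     [0.9, 0.4, 0.6, 0.8, 0.3]))
```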
Effects of Soft Drinks on Resting State EEG and Brain-Computer Interface Performance.
Meng, Jianjun; Mundahl, John; Streitz, Taylor; Maile, Kaitlin; Gulachek, Nicholas; He, Jeffrey; He, Bin
2017-01-01
Motor imagery-based (MI-based) brain-computer interface (BCI) using electroencephalography (EEG) allows users to directly control a computer or external device by modulating and decoding the brain waves. A variety of factors could potentially affect the performance of BCI, such as the health status of subjects or the environment. In this study, we investigated the effects of soft drinks and regular coffee on EEG signals under resting state and on the performance of MI-based BCI. Twenty-six healthy human subjects participated in three or four BCI sessions with a resting period in each session. During each session, the subjects drank an unlabeled soft drink with either sugar (Caffeine Free Coca-Cola), caffeine (Diet Coke), or neither ingredient (Caffeine Free Diet Coke), or a regular coffee if there was a fourth session. The resting-state spectral power in each condition was compared; the analysis showed that powers in the alpha and beta bands after caffeine consumption were substantially decreased compared to the control and sugar conditions. Although this attenuation extended into the frequency range used for the online BCI control signal, group-averaged online BCI performance after consuming caffeine was similar to that in the other conditions. This work, for the first time, shows the effect of caffeine and sugar intake on online BCI performance and resting-state brain signals.
Heading-vector navigation based on head-direction cells and path integration.
Kubie, John L; Fenton, André A
2009-05-01
Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and perform vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal, heading-based navigation is used in small mammals and humans. Copyright 2008 Wiley-Liss, Inc.
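The shortcut-matrix bookkeeping reduces to vector arithmetic and is easy to sketch. Below, path integration supplies vectors between sequentially visited waypoints; shortcuts between arbitrary pairs are then sums along the chain, and the retrieved heading is the angle a head-direction system would track. This is our distillation of the model, not the authors' code.

```python
import numpy as np

class ShortcutMatrix:
    """Store vectors between sequentially visited waypoints; derive
    shortcuts between any registered pair by vector arithmetic."""

    def __init__(self):
        self.positions = {}          # waypoint name -> path-integrated position

    def visit(self, name, path_integration_vector=None):
        # position of the new waypoint = previous position + PI vector
        if not self.positions:
            self.positions[name] = np.zeros(2)
        else:
            last = next(reversed(self.positions.values()))
            self.positions[name] = last + np.asarray(path_integration_vector)

    def shortcut(self, start, goal):
        """Vector from start to goal in head-direction compass coordinates."""
        v = self.positions[goal] - self.positions[start]
        heading = np.degrees(np.arctan2(v[1], v[0]))   # angle to track
        return v, heading

sm = ShortcutMatrix()
sm.visit("nest")
sm.visit("tree", [3.0, 0.0])
sm.visit("rock", [0.0, 4.0])
print(sm.shortcut("rock", "nest"))   # direct vector home: (-3, -4), about -127 deg
```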
Kaplan, A Ya
2016-01-01
Brain-computer interface (BCI) technology based on the registration and interpretation of EEG has recently become one of the most popular developments in neuroscience and psychophysiology. This is due not only to the intended future use of these technologies in many areas of practical human activity, but also to the fact that BCI is a completely new paradigm in psychophysiology, allowing one to test hypotheses about the human brain's capacity to develop skills for interacting with the outside world without the mediation of the motor system, i.e. solely through voluntary modulation of EEG generators. This paper examines the theoretical and experimental basis, the current state, and the development prospects of training, communication and assistive complexes based on BCI, controlled without muscular effort by mental commands detected in the EEG of patients with severely impaired speech and motor systems.
Time Counts! Some Comments on System Latency in Head-Referenced Displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Adelstein, Bernard D.
2013-01-01
System response latency is a prominent characteristic of human-computer interaction. Laggy systems are, however, not simply annoying; they substantially reduce user productivity. The impact of latency on head-referenced display systems, particularly head-mounted systems, is especially disturbing since it can interfere with dynamic registration in augmented reality displays and can, in some cases, indirectly contribute to motion sickness. We summarize several experiments using standard psychophysical discrimination techniques that suggest what system latencies will be required to achieve perceptual stability for spatially referenced computer-generated imagery. In conclusion, we speculate about other system performance characteristics that we would hope to have in a dream augmented reality system.
Fockler, S K; Vavrik, J; Kristiansen, L
1998-11-01
Three types of driver educational strategies were tested to determine the most effective approach for motivating drivers to adjust their head restraints to the correct vertical position: (1) a human interactive personal contact with a member of an ICBC-trained head restraint adjustment team, (2) a passive video presentation of the consequences of correct and incorrect head restraint adjustment, and (3) an interactive three-dimensional kinetic model showing the consequences of correct and incorrect head restraint adjustment. An experimental pretest-posttest control group design was used. A different educational treatment was used in each of three lanes of a vehicle emissions testing facility, with a fourth lane with no intervention serving as a control group. Observational and self-reported data were obtained from a total of 1,974 vehicles entering and exiting the facility. The human intervention led to significantly more drivers actually adjusting their head restraints immediately after the intervention than the passive video or interactive kinetic model approaches, which were both no different from the control group. The human intervention was recommended as the most effective and was implemented successfully on a limited basis during 3 months of 1995 and again during 3 months of 1996.
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms.
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms, a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation on spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as BCI modality.
The myokinetic control interface: tracking implanted magnets as a means for prosthetic control.
Tarantino, S; Clemente, F; Barone, D; Controzzi, M; Cipriani, C
2017-12-07
Upper limb amputation deprives individuals of their innate ability to manipulate objects. Such disability can be restored with a robotic prosthesis linked to the brain by a human-machine interface (HMI) capable of decoding voluntary intentions and sending motor commands to the prosthesis. Clinical and research HMIs rely on the interpretation of electrophysiological signals recorded from the muscles. However, the quest for an HMI that allows for arbitrary and physiologically appropriate control of dexterous prostheses is far from complete. Here we propose a new HMI that aims to track muscle contractions with implanted permanent magnets, by means of magnetic field sensors. We call this a myokinetic control interface. We present the concept, the features, and a demonstration of a prototype which exploits six 3-axis sensors to localize four magnets implanted in a forearm mockup, for the control of a dexterous hand prosthesis. The system proved highly linear (R² = 0.99) and precise (1% repeatability), while exhibiting a short computation delay (45 ms) and limited cross-talk errors (10% of the mean stroke of the magnets). Our results open up promising possibilities for amputees, demonstrating the viability of the myokinetic approach in implementing direct and simultaneous control over multiple digits of an artificial hand.
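Localizing implanted magnets from field sensors is typically posed as a nonlinear least-squares fit of a magnetic dipole model to the sensor readings. The sketch below shows that generic formulation for a single magnet with scipy; the sensor layout, noise handling, and multi-magnet extension are simplified relative to the paper.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4 pi), SI units

def dipole_field(sensor_pos, magnet_pos, moment):
    """Magnetic field of a point dipole at each 3-D sensor position."""
    r = sensor_pos - magnet_pos                   # (n_sensors, 3)
    d = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / d
    m_dot = r_hat @ moment                        # (n_sensors,)
    return MU0_4PI * (3 * m_dot[:, None] * r_hat - moment) / d**3

def locate_magnet(sensor_pos, measured_b, x0):
    """Fit magnet position (3) and moment (3) to measured fields (n x 3)."""
    def residual(x):
        return (dipole_field(sensor_pos, x[:3], x[3:]) - measured_b).ravel()
    return least_squares(residual, x0).x

# Six 3-axis sensors on a plane (as in the mockup), one magnet above them.
sensors = np.array([[x, y, 0.0] for x in (-.02, 0, .02) for y in (-.01, .01)])
true = np.r_[0.005, 0.0, 0.03, 0.0, 0.0, 0.05]    # pos (m), moment (A m^2)
b = dipole_field(sensors, true[:3], true[3:])
print(locate_magnet(sensors, b, x0=np.r_[0, 0, .02, 0, 0, .01]).round(4))
```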
NAS infrastructure management system build 1.5 computer-human interface
DOT National Transportation Integrated Search
2001-01-01
Human factors engineers from the National Airspace System (NAS) Human Factors Branch (ACT-530) of the Federal Aviation Administration William J. Hughes Technical Center conducted an evaluation of the NAS Infrastructure Management System (NIMS) Build ...
Ethics in published brain-computer interface research
NASA Astrophysics Data System (ADS)
Specker Sullivan, L.; Illes, J.
2018-02-01
Objective. Sophisticated signal processing has opened the doors to more research with human subjects than ever before. The increase in the use of human subjects in research comes with a need for increased human subjects protections. Approach. We quantified the presence or absence of ethics language in published reports of brain-computer interface (BCI) studies that involved human subjects and qualitatively characterized ethics statements. Main results. Reports of BCI studies with human subjects that are published in neural engineering and engineering journals are anchored in the rationale of technological improvement. Ethics language is markedly absent, omitted from 31% of studies published in neural engineering journals and 59% of studies in biomedical engineering journals. Significance. As the integration of technological tools with the capacities of the mind deepens, explicit attention to ethical issues will ensure that broad human benefit is embraced and not eclipsed by technological exclusiveness.
Embedded Training Display Technology for the Army’s Future Combat Vehicles
2004-12-01
RESULTS
2.1 OLED Microdisplays and Associated Electronics
The OLED kit used in developing the prototype is available from eMagin Corporation. A... port a computer.
Fig. 1. SVGA PC interface kit from eMagin
2.2 Overall Optical Layout
Head-mounted projection optics as opposed to... eMagin Corporation) chosen for a prototyping phase of this project is color, thus requiring optical aberration correction across the visible
Experiments on Interfaces To Support Query Expansion.
ERIC Educational Resources Information Center
Beaulieu, M.
1997-01-01
Focuses on the user and human-computer interaction aspects of the research based on the Okapi text retrieval system. Three experiments implementing different approaches to query expansion are described, including the use of graphical user interfaces with different windowing techniques. (Author/LRW)
ERIC Educational Resources Information Center
VanLehn, Kurt
2011-01-01
This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by their granularity of the user interface interaction into answer-based, step-based, and…
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications to which they can be applied. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, it describes the sensory receptors within the skin that sense tactile stimuli and electric currents, and explains several factors that influence the transmission of electrical signals to the brain via human skin.
Johnston, R.H.
1983-01-01
Hydrologic testing in an offshore oil well abandoned by Tenneco, Inc., determined the position of the saltwater-freshwater interface in Tertiary limestones underlying the Florida-Georgia continental shelf of the U.S.A. Previous drilling (JOIDES and U.S.G.S. AMCOR projects) established the existence of freshwater far offshore in this area. At the Tenneco well 55 mi. (≈88 km) east of Fernandina Beach, Florida, drill-stem tests made in the interval 1050-1070 ft. (320-326 m) below sea level in the Ocala Limestone recovered a sample with a chloride concentration of 7000 mg/L. Formation water probably is slightly fresher. Pressure-head measurements indicated equivalent freshwater heads of 24-29 ft. (7.3-8.8 m) above sea level. At the coast (Fernandina Beach), a relatively thin transition zone separating freshwater and saltwater occurs at a depth of 2100 ft. (640 m) below sea level. Fifty-five miles (≈88 km) offshore, at the Tenneco well, the base of freshwater is ≈1100 ft. (≈335 m) below sea level. The difference in approximate depth to the freshwater-saltwater transition at these two locations suggests an interface with a very slight landward slope. Assuming the Hubbert interface equation applies here (because the interface and therefore freshwater flow lines are nearly horizontal), the equilibrium depth to the interface should be 40 times the freshwater head above sea level. Using present-day freshwater heads along the coast in the Hubbert equation results in depths to the interface of less than the observed 2100 ft. (640 m). Substituting predevelopment heads in the equation yields depths greater than 2100 ft. (640 m). Thus the interface appears to be in a transient position between the position that would be compatible with present-day heads and the position that would be compatible with predevelopment heads. This implies that some movement of the interface from the predevelopment position has occurred during the past hundred years. The implied movement is incompatible with the hypothesis that the freshwater occurring far offshore in this area is trapped water remaining since the Pleistocene Epoch. © 1982.
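For context, the Hubbert interface equation invoked above reduces, for the nearly horizontal interface described, to the familiar sharp-interface (Ghyben-Herzberg) relation; the densities used below are standard assumed values for freshwater and seawater, not values reported in the abstract.

```latex
% Sharp-interface relation: depth z of the freshwater-saltwater
% interface below sea level as a function of freshwater head h.
\[
  z \;=\; \frac{\rho_f}{\rho_s - \rho_f}\, h
    \;\approx\; \frac{1.000}{1.025 - 1.000}\, h \;=\; 40\, h
\]
% With the offshore heads reported above (h = 24-29 ft), this gives
% z of roughly 960-1160 ft, bracketing the ~1100 ft base of freshwater
% observed at the Tenneco well.
```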
Ahn, Minkyu; Lee, Mijin; Choi, Jinyoung; Jun, Sung Chan
2014-01-01
In recent years, research on Brain-Computer Interface (BCI) technology for healthy users has attracted considerable interest, and BCI games are especially popular. This study reviews the current status of, and describes future directions, in the field of BCI games. To this end, we conducted a literature search and found that BCI control paradigms using electroencephalographic signals (motor imagery, P300, steady state visual evoked potential and passive approach reading mental state) have been the primary focus of research. We also conducted a survey of nearly three hundred participants that included researchers, game developers and users around the world. From this survey, we found that all three groups (researchers, developers and users) agreed on the significant influence and applicability of BCI and BCI games, and they all selected prostheses, rehabilitation and games as the most promising BCI applications. User and developer groups tended to give low priority to passive BCI and the whole head sensor array. Developers gave higher priorities to “the easiness of playing” and the “development platform” as important elements for BCI games and the market. Based on our assessment, we discuss the critical point at which BCI games will be able to progress from their current stage to widespread marketing to consumers. In conclusion, we propose three critical elements important for expansion of the BCI game market: standards, gameplay and appropriate integration. PMID:25116904
Design of cylindrical pipe automatic welding control system based on STM32
NASA Astrophysics Data System (ADS)
Chen, Shuaishuai; Shen, Weicong
2018-04-01
The development of the modern economy is rapidly increasing the demand for pipeline construction, and pipeline welding has become an important link in that construction. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks miniature, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with the automatic welding machine mechanisms and welding power systems that operate in coordination with it. In this paper, a new control system for automatic pipe welding, based on a lower-computer control panel and a host-computer interface, is proposed; it has many advantages over traditional automatic welding machines.
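To illustrate the host-computer/lower-computer split described above, here is a minimal host-side sketch that sends welding parameters to the lower computer over a serial link. The command format, port name, and parameter names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: host-computer interface sending welding
# parameters to an STM32 lower-computer control panel over serial.
import serial  # pyserial

def send_weld_parameters(port="/dev/ttyUSB0", current_a=120,
                         arc_voltage_v=22.0, travel_speed_mm_s=3.0):
    """Send one parameter-set command; return True on acknowledgement."""
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link:
        cmd = f"SET I={current_a} V={arc_voltage_v} S={travel_speed_mm_s}\n"
        link.write(cmd.encode("ascii"))
        reply = link.readline().decode("ascii").strip()
        return reply == "OK"  # assumed acknowledgement protocol
```

The point of the split is that the lower computer runs the time-critical welding loop locally, while the host interface only sets parameters and monitors status.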
Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir
2016-06-01
In the modern day operating room, the surgeon performs surgeries with the support of different medical systems that showcase patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and switch effortlessly among them, all without modifying the systems' software or hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices with a special application are connected to the computers on which the medical systems are running, simulating a normal USB mouse and keyboard. When the surgeon performs interaction using pointing gestures, the desired cursor position on the targeted medical system's display, together with the gestures, is transformed into general events and then sent to the corresponding Android device. Finally, the application running on the Android devices generates the corresponding mouse or keyboard events according to the targeted medical system. To simulate an operating room setting, our unique user interface was tested by seven medical participants who performed several interactions with the visualization of CT, MRI, and fluoroscopy images at varying distances from them. Results from the system usability scale and NASA-TLX workload index indicated a strong acceptance of our proposed user interface.
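Here is a minimal sketch of the event path just described: a pointing-gesture cursor position in a display's normalized coordinates is packaged as a generic event and sent to the helper device that injects the actual mouse event. The JSON event schema, port, and function names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: forward a gesture-derived cursor position to a
# helper device that emulates a USB mouse on the target computer.
import json
import socket

def send_cursor_event(host, norm_x, norm_y, width_px, height_px,
                      click=False, port=5555):
    """norm_x, norm_y in [0, 1] are mapped to pixels on the target display."""
    event = {
        "type": "click" if click else "move",
        "x": int(norm_x * width_px),
        "y": int(norm_y * height_px),
    }
    with socket.create_connection((host, port), timeout=1.0) as s:
        s.sendall((json.dumps(event) + "\n").encode("utf-8"))
```

Keeping the event generic is what lets the same gesture front-end drive any medical system: only the helper on each target machine knows how to turn the event into a native mouse or keyboard action.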
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training, and that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
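The bits/min figures above are the standard way BCI spellers report performance. A minimal sketch of the usual Wolpaw information transfer rate follows; whether the authors used exactly this formula is an assumption.

```python
# Sketch of the Wolpaw information transfer rate (ITR) commonly used
# to report BCI speller performance in bits per minute.
from math import log2

def wolpaw_itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """ITR = bits per selection * selections per minute."""
    n, p = n_classes, accuracy
    bits = log2(n)
    if 0.0 < p < 1.0:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    elif p == 0.0:
        bits += log2(1.0 / (n - 1))  # degenerate all-errors case
    # p == 1.0 leaves bits = log2(n)
    return bits * selections_per_min

# Example: a 92% accurate selection from many symbols, at a pace that
# yields a few selections per minute, lands in the single-digit
# bits/min range reported above.
```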
A programmable ISA to USB interface
NASA Astrophysics Data System (ADS)
Ribas, R. V.
2013-05-01
A programmable device to access and control ISA-standard CAMAC instrumentation and interface it to the USB port of a computer is described in this article. With local processing capabilities and event buffering before data are sent to the computer, the new acquisition system becomes much more efficient.
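The efficiency gain from local buffering comes from replacing one USB transaction per event with block transfers. A minimal host-side sketch under that assumption follows; the device object and its read_block method are hypothetical, not this instrument's actual API.

```python
# Hypothetical sketch: buffered event acquisition. Instead of one USB
# transaction per event, the device accumulates events locally and the
# host reads them in blocks, amortizing the per-transfer overhead.
def acquire(device, n_events, block_size=256):
    """Read n_events from the device in buffered blocks."""
    events = []
    while len(events) < n_events:
        block = device.read_block(block_size)  # one bulk USB transfer
        events.extend(block)
    return events[:n_events]
```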
Open architecture CMM motion controller
NASA Astrophysics Data System (ADS)
Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John
2001-12-01
Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well-tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for an open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host-plus-DSP architecture, single-CPU HSP simplifies integration with the various sensors and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.
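To make the software geometric error compensation mentioned above concrete, here is a minimal one-axis sketch: calibrated axis errors stored in a lookup table are interpolated and subtracted from the commanded position. The table values are illustrative assumptions, not data from this controller.

```python
# Hypothetical sketch: one-axis software geometric error compensation.
# Measured errors at calibration points are interpolated and removed
# from the commanded position before it is sent to the motion loop.
import numpy as np

# Calibration table: commanded X (mm) and measured X error (micrometers)
x_cal_mm = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
x_err_um = np.array([0.0, 1.5, 2.8, 2.1, 0.9])

def compensate_x(x_cmd_mm: float) -> float:
    """Return the corrected X command, in mm."""
    err_mm = np.interp(x_cmd_mm, x_cal_mm, x_err_um) * 1e-3
    return x_cmd_mm - err_mm  # subtract the predicted error
```

Running this map on the same CPU as the servo loop is one reason the single-CPU HSP architecture simplifies compensation compared with pushing it down to a separate DSP.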
Transfer of control system interface solutions from other domains to the thermal power industry.
Bligård, L-O; Andersson, J; Osvalder, A-L
2012-01-01
In a thermal power plant, the operators' roles are to control and monitor the process to achieve efficient and safe production. In this, the human-machine interfaces play a central part. The interfaces need to be updated and upgraded together with the technical functionality to maintain optimal operation. One way of achieving relevant updates is to study other domains and see how they have solved similar issues in their design solutions. The purpose of this paper is to present how interface design solution ideas can be transferred from domains with operator control to thermal power plants. In the study, 15 domains were compared using a model for categorisation of human-machine systems. The results of the domain comparison showed that nuclear power, refinery and ship engine control were most similar to thermal power control. From the findings, a basic interface structure and three specific display solutions were proposed for thermal power control: process parameter overview, plant overview, and feed water view. The systematic comparison of the properties of a human-machine system allowed interface designers to find suitable objects, structures and navigation logics in a range of domains that could be transferred to the thermal power domain.